[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1834 - Still Failing!

2014-09-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1834/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.TestLBHttpSolrServer.testReliability

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
at __randomizedtesting.SeedInfo.seed([2E5BBD83E82DC359:EF9360C5494B12F0]:0)
at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:528)
at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at org.apache.solr.client.solrj.TestLBHttpSolrServer.testReliability(TestLBHttpSolrServer.java:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLe
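For readers unfamiliar with the class under test, a minimal SolrJ sketch of the
kind of load-balanced query the failing test performs is below. The server URLs,
collection name and error handling are illustrative assumptions, not taken from
the test itself.

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.LBHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class LBQueryExample {
  public static void main(String[] args) throws Exception {
    // Round-robins requests across the listed Solr instances; URLs are placeholders.
    LBHttpSolrServer lb = new LBHttpSolrServer(
        "http://host1:8983/solr/collection1",
        "http://host2:8983/solr/collection1");
    try {
      QueryResponse rsp = lb.query(new SolrQuery("*:*"));
      System.out.println("hits: " + rsp.getResults().getNumFound());
    } catch (SolrServerException e) {
      // Surfaces as "No live SolrServers available" when every backend is marked dead.
      e.printStackTrace();
    } finally {
      lb.shutdown();
    }
  }
}
{code}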

[jira] [Comment Edited] (SOLR-6491) Add preferredLeader as a ROLE and a collections API command to respect this role

2014-09-15 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134978#comment-14134978
 ] 

Noble Paul edited comment on SOLR-6491 at 9/16/14 5:23 AM:
---

It was just an FYI . Just in case it helps you

BTW , to add to Shalin's point . I'm not yet convinced that the performance hit 
is significant enough  . Do we have any real users reporting this as a problem?


was (Author: noble.paul):
It was just an FYI . Just in case it helps you

BTW , to add to Shalin's point . I'm not yet convinced that the performance not 
is significant enough  . Do we have any real users reporting this as a problem?

> Add preferredLeader as a ROLE and a collections API command to respect this 
> role
> 
>
> Key: SOLR-6491
> URL: https://issues.apache.org/jira/browse/SOLR-6491
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.11, 5.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> Leaders can currently get out of balance due to the sequence of how nodes are 
> brought up in a cluster. For very good reasons shard leadership cannot be 
> permanently assigned.
> However, it seems reasonable that a sys admin could optionally specify that a 
> particular node be the _preferred_ leader for a particular collection/shard. 
> During leader election, preference would be given to any node so marked when 
> electing any leader.
> So the proposal here is to add another role for preferredLeader to the 
> collections API, something like
> ADDROLE?role=preferredLeader&collection=collection_name&shard=shardId
> Second, it would be good to have a new collections API call like 
> ELECTPREFERREDLEADERS?collection=collection_name
> (I really hate that name so far, but you see the idea). That command would 
> (asynchronously?) make an attempt to transfer leadership for each shard in a 
> collection to the leader labeled as the preferred leader by the new ADDROLE 
> role.
> I'm going to start working on this, any suggestions welcome!
> This will subsume several other JIRAs, I'll link them momentarily.
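As a concrete illustration of the two proposed calls, a client could issue them
as plain HTTP requests against the Collections API endpoint. Note that neither
action exists yet (they are this issue's proposal), and the base URL below is a
placeholder.

{code:java}
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PreferredLeaderSketch {
  // Issues a GET against a Collections API URL and prints the raw response.
  static void call(String url) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    try (InputStream in = conn.getInputStream()) {
      byte[] buf = new byte[4096];
      int n;
      while ((n = in.read(buf)) != -1) {
        System.out.write(buf, 0, n);
      }
    } finally {
      conn.disconnect();
    }
  }

  public static void main(String[] args) throws Exception {
    String base = "http://localhost:8983/solr/admin/collections"; // placeholder
    // Proposed: mark the preferred leader for one shard of a collection.
    call(base + "?action=ADDROLE&role=preferredLeader"
        + "&collection=collection_name&shard=shardId");
    // Proposed: ask Solr to move leadership to the marked replicas.
    call(base + "?action=ELECTPREFERREDLEADERS&collection=collection_name");
  }
}
{code}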



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6491) Add preferredLeader as a ROLE and a collections API command to respect this role

2014-09-15 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134978#comment-14134978
 ] 

Noble Paul edited comment on SOLR-6491 at 9/16/14 5:20 AM:
---

It was just an FYI . Just in case it helps you

BTW , to add to Shalin's point . I'm not yet convinced that the performance not 
is significant enough  . Do we have any real users reporting this as a problem?


was (Author: noble.paul):
It was just an FYI . Just in case it helps you




[jira] [Commented] (SOLR-6491) Add preferredLeader as a ROLE and a collections API command to respect this role

2014-09-15 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134978#comment-14134978
 ] 

Noble Paul commented on SOLR-6491:
--

It was just an FYI . Just in case it helps you




[jira] [Commented] (SOLR-6491) Add preferredLeader as a ROLE and a collections API command to respect this role

2014-09-15 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134975#comment-14134975
 ] 

Erick Erickson commented on SOLR-6491:
--

[~noble.paul] right, but AFAIK the Overseer stuff isn't particularly sensitive 
to collections and shards. Is there a notion of assigning an overseer role to a 
particular node but _only_ for a particular collection/shard combination? All 
you've got is the role and the node; I don't see any way to say "add a role for 
_this_ collection and _this_ shard".

Of course, if there is a way, I've just wasted a bunch of time.

[~shalinmangar] I'm trusting some folks who are reporting the edge case, hoping 
they'll chime in with "real world". But even if not, "an extra thread or two" 
times 100 shards can mount up.




[jira] [Commented] (SOLR-6482) Add an onlyIfDown flag for DELETEREPLICA collections API command

2014-09-15 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134963#comment-14134963
 ] 

Shalin Shekhar Mangar commented on SOLR-6482:
-

I'm a little confused about this feature. You say that deleting the index 
automatically is scary, and then you talk about ZK as truth and about replicas 
coming back up. What is the use case behind onlyIfDown? Why would anyone invoke 
DELETEREPLICA against a replica if they don't want it to be removed from the 
cluster state?

> Add an onlyIfDown flag for DELETEREPLICA collections API command
> 
>
> Key: SOLR-6482
> URL: https://issues.apache.org/jira/browse/SOLR-6482
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.11, 5.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-6482.patch, SOLR-6482.patch, SOLR-6482.patch
>
>
> Having the DELETEREPLICA delete the index is scary for some situations, 
> especially ones in which the operations people are taking more explicit 
> control of the topology of their cluster. As we move towards ZK being the 
> "one source of truth" and deleting replicas that then come back up, this is 
> even scarier.
> I propose to have an optional flag onlyIfDown that removes the replica from 
> the ZK cluster state if (and only if) the node was offline. Default value: 
> false.
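For illustration, a call using the proposed flag might look like the following.
DELETEREPLICA and its collection/shard/replica parameters already exist;
onlyIfDown is the new flag proposed here, and the host and identifiers are
placeholders.

{noformat}
http://localhost:8983/solr/admin/collections?action=DELETEREPLICA
    &collection=collection_name&shard=shard1&replica=core_node2&onlyIfDown=true
{noformat}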






[jira] [Comment Edited] (SOLR-6491) Add preferredLeader as a ROLE and a collections API command to respect this role

2014-09-15 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134953#comment-14134953
 ] 

Noble Paul edited comment on SOLR-6491 at 9/16/14 4:54 AM:
---

SOLR-5476 , SOLR-5893 does the same for overseer


was (Author: noble.paul):
SOLR-5476 does the same for overseer




[jira] [Commented] (SOLR-6491) Add preferredLeader as a ROLE and a collections API command to respect this role

2014-09-15 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134954#comment-14134954
 ] 

Shalin Shekhar Mangar commented on SOLR-6491:
-

bq. Since updates do put some extra load on the leader, this can create an odd 
load distribution.

What's the extra load? Every replica writes to the transaction log and the 
index. The leader has an extra thread or two to write updates to replicas but 
that is it. I don't see why that should be expensive or create undue load. 
We've had bugs in the past such as SOLR-6136 but if there are others then we 
should find those and fix them before we go through with this feature.




[jira] [Commented] (SOLR-6491) Add preferredLeader as a ROLE and a collections API command to respect this role

2014-09-15 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134953#comment-14134953
 ] 

Noble Paul commented on SOLR-6491:
--

SOLR-5476 does the same for overseer




[jira] [Commented] (SOLR-5302) Analytics Component

2014-09-15 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134951#comment-14134951
 ] 

Erick Erickson commented on SOLR-5302:
--

Man! If you only knew how long this has been on my back burner...

Thanks! AFAIK this is certainly the consensus approach; then we can use the 
pluggable analytics stuff that Joel put together to support distributed stats.



> Analytics Component
> ---
>
> Key: SOLR-5302
> URL: https://issues.apache.org/jira/browse/SOLR-5302
> Project: Solr
>  Issue Type: New Feature
>Reporter: Steven Bower
>Assignee: Erick Erickson
> Fix For: 5.0
>
> Attachments: SOLR-5302.patch, SOLR-5302.patch, SOLR-5302.patch, 
> SOLR-5302.patch, SOLR-5302_contrib.patch, Search Analytics Component.pdf, 
> Statistical Expressions.pdf, solr_analytics-2013.10.04-2.patch
>
>
> This ticket is to track a "replacement" for the StatsComponent. The 
> AnalyticsComponent supports the following features:
> * All functionality of StatsComponent (SOLR-4499)
> * Field Faceting (SOLR-3435)
> ** Support for limit
> ** Sorting (bucket name or any stat in the bucket)
> ** Support for offset
> * Range Faceting
> ** Supports all options of standard range faceting
> * Query Faceting (SOLR-2925)
> * Ability to use overall/field facet statistics as input to range/query 
> faceting (i.e. calc min/max date and then facet over that range)
> * Support for more complex aggregate/mapping operations (SOLR-1622)
> ** Aggregations: min, max, sum, sum-of-square, count, missing, stddev, mean, 
> median, percentiles
> ** Operations: negation, abs, add, multiply, divide, power, log, date math, 
> string reversal, string concat
> ** Easily pluggable framework to add additional operations
> * New / cleaner output format
> Outstanding Issues:
> * Multi-value field support for stats (supported for faceting)
> * Multi-shard support (may not be possible for some operations, eg median)






[jira] [Commented] (SOLR-6115) Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler

2014-09-15 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134948#comment-14134948
 ] 

Shalin Shekhar Mangar commented on SOLR-6115:
-

All tests and precommit pass. I'll commit this soon.

> Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and 
> CollectionHandler
> ---
>
> Key: SOLR-6115
> URL: https://issues.apache.org/jira/browse/SOLR-6115
> Project: Solr
>  Issue Type: Task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-6115.patch
>
>
> The enum/string handling for actions in Overseer and OCP is a mess. We should 
> fix it.
> From: 
> https://issues.apache.org/jira/browse/SOLR-5466?focusedCommentId=13918059&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13918059
> {quote}
> I started to untangle the fact that we have all the strings in 
> OverseerCollectionProcessor, but also have a nice CollectionAction enum. And 
> the commands are intermingled with parameters, it all seems rather confusing. 
> Does it make sense to use the enum rather than the strings? Or somehow 
> associate the two? Probably something for another JIRA though...
> {quote}
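A hedged sketch of the direction discussed above -- resolving the incoming
action string to an enum once and switching on it, rather than scattering
string comparisons -- with made-up constants, not the contents of the actual
CollectionAction enum:

{code:java}
// Illustrative only -- not the actual Solr enum.
enum CollectionAction {
  CREATE, DELETE, RELOAD, ADDROLE, REMOVEROLE;

  // Resolve a request's "action" parameter to an enum constant, or null if unknown.
  static CollectionAction get(String name) {
    if (name == null) return null;
    try {
      return valueOf(name.trim().toUpperCase(java.util.Locale.ROOT));
    } catch (IllegalArgumentException e) {
      return null;
    }
  }
}

class ActionDispatchExample {
  static void handle(String actionParam) {
    CollectionAction action = CollectionAction.get(actionParam);
    if (action == null) {
      throw new IllegalArgumentException("Unknown action: " + actionParam);
    }
    switch (action) {       // switch on the enum instead of comparing raw strings
      case ADDROLE:
        // ... handle ADDROLE here
        break;
      default:
        // ... dispatch the remaining actions
        break;
    }
  }
}
{code}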






[jira] [Updated] (SOLR-6512) Add a collections API call to attach arbitrary properties to a replica

2014-09-15 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-6512:
-
Attachment: SOLR-6512.patch

Well, it took me more than a 'minute'. This patch (which I haven't checked over 
entirely yet) does the following:

1> Allows an arbitrary role to be assigned to a particular replica for a 
collection/slice.
2> Defaults to 'only one per slice'. Thus if you set the same property on a 
second node in a slice, it is removed from the first one.
3> Allows <2> to be overridden by a param multiplePerSlice=true. 
4> Throws an error for <3> if the role in question is in a list of known roles 
that should have one role per slice. "preferredLeader" is the one and only role 
in this list at present.

I'll look at it in the morning and no doubt see stuff I want to change. That 
said, AFAIK it's quite close to being ready to commit, so speak up now if there 
are issues.
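To make <1>-<4> concrete, a request could look roughly like this. The action
and parameter names are hypothetical (the patch's actual command name isn't
shown in this thread); only multiplePerSlice comes from the description above,
and the host, collection and replica identifiers are placeholders.

{noformat}
/admin/collections?action=ADDREPLICAPROP&collection=collection_name&shard=shard1
    &replica=core_node1&property=preferredLeader&property.value=true
# With multiplePerSlice=true the "remove from the other replicas" behavior in <2>
# would be skipped, except for known one-per-slice roles such as preferredLeader (<4>).
{noformat}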

> Add a collections API call to attach arbitrary properties to a replica
> --
>
> Key: SOLR-6512
> URL: https://issues.apache.org/jira/browse/SOLR-6512
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-6512.patch
>
>
> This is a sub-task for SOLR-6491, but seems generally useful. 
> Since this is in support of the "preferredLeader" functionality, I've run 
> into some considerations that I wanted some feedback on how to handle.
> "preferredLeader" has the restriction that there should only be one per 
> slice, so setting this for a particular node means removing the property for 
> all the other replicas on the slice. Not a problem to do, my question is more 
> whether this is something reasonable to enforce on an arbitrary property 
> based on what that property is? Perfectly do-able, but "semantically 
> challenged". Currently, this is never a node with "preferedLeader" set to 
> "false", it is forcibly removed from other nodes in the slice when this 
> property is assigned.
> The problem here is that there's nothing about assigning an arbitrary 
> property to a node that would reasonably imply this kind of behavior. One 
> could always control this with secondary flags on the command, e.g. 
> "shardExclusive=true|false" for instance, perhaps with safety checks in for 
> known one-per-shard properties like "preferredLeader".
> "preferredLeader" seems to fit more naturally into a "role", but currently 
> ADDROLE and DELETEROLE have nothing to do with the notion of setting a role 
> for a particular node relative to a collection/shard. Easy enough to add, but 
> enforcing the "only one node per slice may have this role" rule there is 
> similarly arbitrary and overloads the ADDROLE/DELETEROLE in a way that seems 
> equally confusing. Plus, checking whether the required collection/shard/node 
> params are present becomes based on the value of the property being set, 
> which is all equally arbitrary.
> The other interesting thing is that setting an arbitrary property on a node 
> would allow one to mess things up royally by, say, changing properties like 
> "core", or "base_url" or node_name at will. Actually this is potentially 
> useful, but very, very dangerous and I'm not particularly interested in 
> supporting it ;).  I suppose we could require a prefix, say the only settable 
> properties are "property.whatever".
> We could also add something specific to nodes, something like 
> ADDREPLICAROLE/DELETEREPLICAROLE, perhaps with sub-params like 
> "onlyOneAllowedPerShard", but this gets messy and relies on the users "doing 
> the right thing". I prefer enforcing rules like this  based on the role I 
> think. Or at least enforcing these kinds of requirements on the 
> "preferredLeader" role if we go that way.
> What are people's thoughts here? I think I'm tending towards the 
> ADDREPLICAROLE/DELETEREPLICAROLE way of doing this, but it's not set in 
> stone. I have code locally for arbitrary properties that I can modify for the 
> role bits.
> So, if I'm going to summarize the points I'd like feedback on:
> 1> Is setting arbitrary properties on a node desirable? If so, should we 
> require a prefix like "property" to prevent resetting values SolrCloud 
> depends on?
> 2> Is it better to piggyback on ADDROLE/DELETEROLE? Personally I'm not in 
> favor of this one. Too messy with requiring additional parameters to work 
> right in this case
> 3> Is the best option to create new collections API calls for 
> ADDREPLICAROLE/DELETEREPLICAROLE that
> 3.1> require collection/slice/node parameters
> 3.2> enforces the "onlyOnePerShard" rule for certain known roles
> 3.3 v1> allows users to specify arbitrary roles something like 
> "onlyOnePerShard" as an optional T|F parameter, otherwise is totally open.
> -or-
> 3.3 v2> No sup

[jira] [Commented] (SOLR-6500) Refactor FileFetcher in SnapPuller, add debug logging

2014-09-15 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134909#comment-14134909
 ] 

Varun Thacker commented on SOLR-6500:
-

Refactoring looks good!

> Refactor FileFetcher in SnapPuller, add debug logging
> -
>
> Key: SOLR-6500
> URL: https://issues.apache.org/jira/browse/SOLR-6500
> Project: Solr
>  Issue Type: Improvement
>  Components: replication (java), SolrCloud
>Reporter: Ramkumar Aiyengar
>Priority: Minor
>
> I was debugging some replication slowness and felt the need for some debug 
> statements in this code path, which then pointed me to a lot of repeated code 
> between local fs and directory file fetching logic in SnapPuller (for which 
> there was a TODO as well), so I went ahead and refactored that as well.
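A minimal sketch of the kind of guarded debug logging described, assuming SLF4J
(which Solr uses) and an illustrative logger and message:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class FileFetchLoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(FileFetchLoggingSketch.class);

  void logFetched(String fileName, long bytes, long millis) {
    // The guard avoids building the message when debug logging is disabled.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Fetched {} ({} bytes) in {} ms", fileName, bytes, millis);
    }
  }
}
{code}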






[jira] [Updated] (SOLR-5302) Analytics Component

2014-09-15 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-5302:
---
Attachment: SOLR-5302_contrib.patch

As the first step of finally getting this into 4x, here's a patch that moves 
the analytics component to contrib (which seems to be the consensus).




Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20) - Build # 11255 - Failure!

2014-09-15 Thread Steve Rowe
I committed a fix. - Steve

On Sep 15, 2014, at 10:40 PM, Policeman Jenkins Server  
wrote:

> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11255/
> Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseG1GC
> 
> All tests passed
> 
> Build Log:
> [...truncated 26157 lines...]
> BUILD FAILED
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:491: The 
> following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:85: The 
> following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:256: The 
> following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:2140:
>  The following error occurred while executing this line:
> /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1835:
>  Rat problems were found!
> 
> Total time: 102 minutes 47 seconds
> Build step 'Invoke Ant' marked build as failure
> [description-setter] Description set: Java: 64bit/jdk1.8.0_20 
> -XX:-UseCompressedOops -XX:+UseG1GC
> Archiving artifacts
> Recording test results
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
> 





[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20) - Build # 11255 - Failure!

2014-09-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11255/
Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 26157 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:491: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:85: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:256: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:2140:
 The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1835:
 Rat problems were found!

Total time: 102 minutes 47 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.8.0_20 
-XX:-UseCompressedOops -XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Comment Edited] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-09-15 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134487#comment-14134487
 ] 

Anshum Gupta edited comment on SOLR-5986 at 9/16/14 2:34 AM:
-

Thanks Robert for the review. Updated the patch with most of your feedback + 
recommendations.

I didn't change the design from using a ThreadLocal to passing the timeout in 
the constructor as the user might pre-construct and reuse the reader/searcher. 
(Solr specifically wouldn't play well with such a change).

I did consider doing it that way but wasn't able to figure out a way to do that. 
I'll just spend some more time to see if there's a clean way to do it (have the 
timeout be a passed parameter to the constructor instead of being ThreadLocal).

About the test relying on the system clock: not really, as the sleep is in the 
wrapped .next() and the timeout values are not even close.

Also, I can't figure out how to view the review on the review board (doesn't 
show up for me and the mail went to spam until Steve mentioned about the 
email). I've posted the updated patch there to make it easier for everyone to 
look at it.


was (Author: anshumg):
Thanks Robert for the review. Updated the patch with most of your feedback + 
recommendations.

I didn't change the design from using a ThreadLocal to passing the timeout in 
the constructor as the user might pre-construct and reuse the reader/searcher. 
(Solr specifically wouldn't play well with such a change).

I did consider doing it that way but wasn't able to figure out a way to do that. 
I'll just spend some more time to see if there's a clean way to do it.

About the test relying on the system clock: not really, as the sleep is in the 
wrapped .next() and the timeout values are not even close.

Also, I can't figure out how to view the review on the review board (doesn't 
show up for me and the mail went to spam until Steve mentioned about the 
email). I've posted the updated patch there to make it easier for everyone to 
look at it.

> Don't allow runaway queries from harming Solr cluster health or search 
> performance
> --
>
> Key: SOLR-5986
> URL: https://issues.apache.org/jira/browse/SOLR-5986
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 4.10
>
> Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch
>
>
> The intent of this ticket is to have all distributed search requests stop 
> wasting CPU cycles on requests that have already timed out or are so 
> complicated that they won't be able to execute. We have come across a case 
> where a nasty wildcard query within a proximity clause was causing the 
> cluster to enumerate terms for hours even though the query timeout was set to 
> minutes. This caused a noticeable slowdown within the system which made us 
> restart the replicas that happened to service that one request, the worst 
> case scenario are users with a relatively low zk timeout value will have 
> nodes start dropping from the cluster due to long GC pauses.
> [~amccurry] Built a mechanism into Apache Blur to help with the issue in 
> BLUR-142 (see commit comment for code, though look at the latest code on the 
> trunk for newer bug fixes).
> Solr should be able to either prevent these problematic queries from running 
> by some heuristic (possibly estimated size of heap usage) or be able to 
> execute a thread interrupt on all query threads once the time threshold is 
> met. This issue mirrors what others have discussed on the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E
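As a generic sketch of the deadline idea being discussed (a per-request limit
carried in a ThreadLocal that long-running loops poll), with made-up class and
method names and not reflecting the actual SOLR-5986 patch:

{code:java}
// Minimal sketch: the request thread sets a deadline before running the query,
// and hot loops (e.g. term enumeration) call checkTimeout() periodically.
public final class QueryDeadline {
  private static final ThreadLocal<Long> DEADLINE_NANOS = new ThreadLocal<>();

  // Called by the request handler before query execution; timeoutMillis is per request.
  public static void start(long timeoutMillis) {
    DEADLINE_NANOS.set(System.nanoTime() + timeoutMillis * 1_000_000L);
  }

  // Aborts the query when the allotted time is exceeded.
  public static void checkTimeout() {
    Long deadline = DEADLINE_NANOS.get();
    if (deadline != null && System.nanoTime() > deadline) {
      throw new RuntimeException("Query exceeded its allotted time and was aborted");
    }
  }

  // Clears the slot so pooled threads don't leak a stale deadline into the next request.
  public static void clear() {
    DEADLINE_NANOS.remove();
  }
}
{code}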






[jira] [Updated] (SOLR-6354) Support stats over functions

2014-09-15 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-6354:
---
Attachment: SOLR-6354.patch


First start at a patch along the lines of the original idea I outlined (now 
that SOLR-6507 has cleaned up the duplicate/broken localparams code).

In this patch, you can see the logic for figuring out if/when we're dealing 
with field faceting vs query/function faceting.  In the test changes, you can 
see it being smart about requests to facet over a function that really just 
normalizes away to being a single field.


The next steps from here are labeled with nocommits -- start propagating the 
StatsField instance directly to the various classes that deal with 
StatsValuesFactory so it can see if/when we need a schema-based StatsValue 
instance, and when it should return a (new) StatsValue class that deals 
directly with a ValueSource.



bq. do you think some of this work would make it easier for us to do stats on 
scores? Scores mean something in my application and I want to use them in the 
Stats component.

in the same sense that you can use \{!frange\} to filter on the scores of an 
arbitrary query, we should ultimately be able to compute stats on the scores of 
an arbitrary query -- but done in a second pass, regardless of whether or not 
the specified query is the same as the "main" query.

ie, something like this should work

{noformat}
  q = foo bar^34 baz
  stats = true
stats.field = {!query key=score_stats v=$q}
{noformat}

...just as well as something like this...

{noformat}
  q = foo bar^3.4 +baz
  stats = true
stats.field = {!lucene key=some_other_query_score_stats}yak^1.2 +zazz
{noformat}

...but the first won't be doing anything special to compute the stats "on the 
fly" as documents are collected.


> Support stats over functions
> 
>
> Key: SOLR-6354
> URL: https://issues.apache.org/jira/browse/SOLR-6354
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-6354.patch, TstStatsComponent.java
>
>
> The majority of the logic in StatsValuesFactory for dealing with stats over 
> fields just uses the ValueSource API.  There's very little reason we can't 
> generalize this to support computing aggregate stats over any arbitrary 
> function (or the scores from an arbitrary query).
> Example...
> {noformat}
> stats.field={!func key=mean_rating 
> mean=true}prod(user_rating,pow(editor_rating,2))
> {noformat}
> ...would mean that we can compute a conceptual "rating" for each doc by 
> multiplying the user_rating field by the square of the editor_rating field, 
> and then we'd compute the mean of that "rating" across all docs in the set 
> and return it as "mean_rating"






[jira] [Resolved] (LUCENE-5820) SuggestStopFilter should have a factory

2014-09-15 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-5820.

   Resolution: Fixed
Fix Version/s: 5.0
   4.11

Committed to trunk and branch_4x.

Thanks Varun!

> SuggestStopFilter should have a factory
> ---
>
> Key: LUCENE-5820
> URL: https://issues.apache.org/jira/browse/LUCENE-5820
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Varun Thacker
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 4.11, 5.0
>
> Attachments: LUCENE-5820.patch, LUCENE-5820.patch, LUCENE-5820.patch, 
> LUCENE-5820.patch
>
>
> While trying to use the new Suggester in Solr I realized that 
> SuggestStopFilter did not have a factory. We should add one.






[jira] [Commented] (LUCENE-5820) SuggestStopFilter should have a factory

2014-09-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134814#comment-14134814
 ] 

ASF subversion and git services commented on LUCENE-5820:
-

Commit 1625200 from [~sar...@syr.edu] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1625200 ]

LUCENE-5820: SuggestStopFilter should have a factory (merged trunk r1625193, 
r1625194, and r1625197)




[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #703: POMs out of sync

2014-09-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/703/

2 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=3380, name=Thread-1277, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:313)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=3380, name=Thread-1277, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:313)
at __randomizedtesting.SeedInfo.seed([CE61A2569A32C2C5]:0)


FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=3380, name=Thread-1277, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:313)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=3380, name=Thread-1277, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:

[jira] [Commented] (LUCENE-5820) SuggestStopFilter should have a factory

2014-09-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134805#comment-14134805
 ] 

ASF subversion and git services commented on LUCENE-5820:
-

Commit 1625197 from [~sar...@syr.edu] in branch 'dev/trunk'
[ https://svn.apache.org/r1625197 ]

LUCENE-5820: license exception + javadocs




[jira] [Commented] (LUCENE-5820) SuggestStopFilter should have a factory

2014-09-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134744#comment-14134744
 ] 

ASF subversion and git services commented on LUCENE-5820:
-

Commit 1625194 from [~sar...@syr.edu] in branch 'dev/trunk'
[ https://svn.apache.org/r1625194 ]

LUCENE-5820: eol style




[jira] [Commented] (LUCENE-5820) SuggestStopFilter should have a factory

2014-09-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134741#comment-14134741
 ] 

ASF subversion and git services commented on LUCENE-5820:
-

Commit 1625193 from [~sar...@syr.edu] in branch 'dev/trunk'
[ https://svn.apache.org/r1625193 ]

LUCENE-5820: SuggestStopFilter should have a factory




[jira] [Commented] (LUCENE-5820) SuggestStopFilter should have a factory

2014-09-15 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134738#comment-14134738
 ] 

Steve Rowe commented on LUCENE-5820:


Looks good, thanks Varun.

Committing shortly.




[jira] [Updated] (LUCENE-5955) filter eclipse-build from eclipse file search

2014-09-15 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated LUCENE-5955:
-
Attachment: LUCENE-5955.patch

The dot.project file had some severe whitespace issues.  Normally I would avoid 
fixing problems like that because it can make life difficult for others, but 
this is a file that sees very little churn, so I see it as pretty safe.

I haven't yet figured out how to exclude "search.log" from the search ... this 
file seems to be pretty big on my system, so it takes a long time to search.  I 
tried filtering it in the same place as the eclipse-build directory, but that 
didn't do it.

> filter eclipse-build from eclipse file search
> -
>
> Key: LUCENE-5955
> URL: https://issues.apache.org/jira/browse/LUCENE-5955
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Affects Versions: 4.10
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Fix For: 4.11, 5.0
>
> Attachments: LUCENE-5955.patch
>
>
> When doing a file search in the eclipse project built with "ant eclipse", the 
> eclipse-build directory gets searched.  This results in about twice as many 
> files being searched, so it takes about twice as long as it should.  I would 
> expect eclipse to automatically skip that directory because it knows that 
> directory is its own build target, but it doesn't seem that eclipse is that 
> smart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5955) filter eclipse-build from eclipse file search

2014-09-15 Thread Shawn Heisey (JIRA)
Shawn Heisey created LUCENE-5955:


 Summary: filter eclipse-build from eclipse file search
 Key: LUCENE-5955
 URL: https://issues.apache.org/jira/browse/LUCENE-5955
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Affects Versions: 4.10
Reporter: Shawn Heisey
Assignee: Shawn Heisey
Priority: Minor
 Fix For: 4.11, 5.0


When doing a file search in the eclipse project built with "ant eclipse", the 
eclipse-build directory gets searched.  This results in about twice as many 
files being searched, so it takes about twice as long as it should.  I would 
expect eclipse to automatically skip that directory because it knows that 
directory is its own build target, but it doesn't seem that eclipse is that 
smart.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134684#comment-14134684
 ] 

Uwe Schindler commented on LUCENE-5950:
---

Patch looks fine to me. I will test it tomorrow with Java 8 on Windows with my 
huge number of JDK installations...

Thanks for already "hacking" around bugs in javac and ecj-compiler (for those 
who wonder why the code changes were needed like removal of diamond operator or 
splitting chained method invocations into two lines).
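
To make the kind of change being referred to concrete, here is a small, purely 
illustrative Java example (not taken from the patch) of spelling out a diamond 
operator and splitting a chained call:

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CompilerWorkaroundExamples {
  public static void main(String[] args) {
    // Spell out the generic type arguments instead of using the diamond
    // operator (new HashMap<>()), the kind of rewrite used when a javac/ecj
    // build mis-handles inference.
    Map<String, List<Integer>> counts = new HashMap<String, List<Integer>>();

    // Split a chained invocation into two statements so the intermediate type
    // is explicit, instead of new StringBuilder().append("a").append("b").toString().
    StringBuilder sb = new StringBuilder();
    String key = sb.append("a").append("b").toString();

    counts.put(key, new ArrayList<Integer>());
    System.out.println(counts);
  }
}
{code}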

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
> LUCENE-5950.patch, LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5952) Make Version.java lenient again?

2014-09-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134667#comment-14134667
 ] 

Uwe Schindler edited comment on LUCENE-5952 at 9/15/14 11:09 PM:
-

bq. Wait, it was lenient before. It was just a string. Unrelated to 
version.java in any way.

Version.java was not lenient at all. In the past, Version.java was very strict 
(it only allowed the constants). For parsing index version numbers we used 
another comparator, which was lenient and accepted any version number to 
compare (like the Maven version comparator). Now it's both in one class, and 
therefore we have to relax Version.java more to be future proof.

Before it was an enum; now it's a simple class with some additional bounds on 
the major version. We just have to remove the major version constraint.

_In my opinion, we should not save the index version as a string at all and 
instead save the "encoded value" as an (v)int._


was (Author: thetaphi):
bq. Wait, it was lenient before. It was just a string. Unrelated to 
version.java in any way.

Version.java was not lenient at all. In the past Version.java was very strict 
(only allowed the constants). For parsing index version numbers we used another 
comparator, which was lenient and accepted any version number to compare (like 
the Maven version comparator). Now its both in one class and therefore we have 
to relax Version.java more, to be future proof.

Before it was enum, now its a simple class with some additional bounds on major 
version. We just have to remove the major version constraint.

_In my opinion, we should not save index version as string at all and instead 
save the "encoded value" and an (v)int._

> Make Version.java lenient again?
> 
>
> Key: LUCENE-5952
> URL: https://issues.apache.org/jira/browse/LUCENE-5952
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.10
>Reporter: Michael McCandless
>Priority: Blocker
> Fix For: 4.10.1, 4.11, 5.0
>
> Attachments: LUCENE-5952.patch
>
>
> As discussed on the dev list, it's spooky how Version.java tries to fully 
> parse the incoming version string ... and then throw exceptions that lack 
> details about what invalid value it received, which file contained the 
> invalid value, etc.
> It also seems too low level to be checking versions (e.g. is not future proof 
> for when 4.10 is passed a 5.x index by accident), and seems redundant with 
> the codec headers we already have for checking versions?
> Should we just go back to lenient parsing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5952) Make Version.java lenient again?

2014-09-15 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-5952:
--
Affects Version/s: 4.10

> Make Version.java lenient again?
> 
>
> Key: LUCENE-5952
> URL: https://issues.apache.org/jira/browse/LUCENE-5952
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.10
>Reporter: Michael McCandless
>Priority: Blocker
> Fix For: 4.10.1, 4.11, 5.0
>
> Attachments: LUCENE-5952.patch
>
>
> As discussed on the dev list, it's spooky how Version.java tries to fully 
> parse the incoming version string ... and then throw exceptions that lack 
> details about what invalid value it received, which file contained the 
> invalid value, etc.
> It also seems too low level to be checking versions (e.g. is not future proof 
> for when 4.10 is passed a 5.x index by accident), and seems redundant with 
> the codec headers we already have for checking versions?
> Should we just go back to lenient parsing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5952) Make Version.java lenient again?

2014-09-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134667#comment-14134667
 ] 

Uwe Schindler commented on LUCENE-5952:
---

bq. Wait, it was lenient before. It was just a string. Unrelated to 
version.java in any way.

Version.java was not lenient at all. In the past, Version.java was very strict 
(it only allowed the constants). For parsing index version numbers we used 
another comparator, which was lenient and accepted any version number to 
compare (like the Maven version comparator). Now it's both in one class, and 
therefore we have to relax Version.java more to be future proof.

Before it was an enum; now it's a simple class with some additional bounds on 
the major version. We just have to remove the major version constraint.

_In my opinion, we should not save the index version as a string at all and 
instead save the "encoded value" and an (v)int._

> Make Version.java lenient again?
> 
>
> Key: LUCENE-5952
> URL: https://issues.apache.org/jira/browse/LUCENE-5952
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Priority: Blocker
> Fix For: 4.10.1, 4.11, 5.0
>
> Attachments: LUCENE-5952.patch
>
>
> As discussed on the dev list, it's spooky how Version.java tries to fully 
> parse the incoming version string ... and then throw exceptions that lack 
> details about what invalid value it received, which file contained the 
> invalid value, etc.
> It also seems too low level to be checking versions (e.g. is not future proof 
> for when 4.10 is passed a 5.x index by accident), and seems redundant with 
> the codec headers we already have for checking versions?
> Should we just go back to lenient parsing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5952) Make Version.java lenient again?

2014-09-15 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134657#comment-14134657
 ] 

Robert Muir commented on LUCENE-5952:
-

Wait, it was lenient before. It was just a string. Unrelated to version.java in 
any way.

> Make Version.java lenient again?
> 
>
> Key: LUCENE-5952
> URL: https://issues.apache.org/jira/browse/LUCENE-5952
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Priority: Blocker
> Fix For: 4.10.1, 4.11, 5.0
>
> Attachments: LUCENE-5952.patch
>
>
> As discussed on the dev list, it's spooky how Version.java tries to fully 
> parse the incoming version string ... and then throw exceptions that lack 
> details about what invalid value it received, which file contained the 
> invalid value, etc.
> It also seems too low level to be checking versions (e.g. is not future proof 
> for when 4.10 is passed a 5.x index by accident), and seems redundant with 
> the codec headers we already have for checking versions?
> Should we just go back to lenient parsing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5952) Make Version.java lenient again?

2014-09-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134654#comment-14134654
 ] 

Uwe Schindler commented on LUCENE-5952:
---

Just a suggestion: Can we add a fake index with a version number of "6.1.0" to 
see if you correctly get IndexTooNewException and not an IAE? Could be added to 
TestBackwardsCompatibility! :-) Just a ZIP file, but hack the write code 
temporarily to pass a fake string.
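
A rough sketch of what such a test could look like (hypothetical code, not 
part of TestBackwardsCompatibility; the helper openFakeIndexDirectory() and 
the ZIP name are made up for illustration):

{code:java}
// Hypothetical sketch only; openFakeIndexDirectory() would unzip the fake
// index into a temp dir and open a Directory on it.
public void testTooNewIndexVersion() throws Exception {
  try (Directory dir = openFakeIndexDirectory("index.6.1.0.zip")) {
    DirectoryReader.open(dir);
    fail("opening an index claiming version 6.1.0 should have failed");
  } catch (IndexFormatTooNewException expected) {
    // good: a too-new index should surface as IndexFormatTooNewException,
    // not as an IllegalArgumentException from version parsing
  }
}
{code}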

> Make Version.java lenient again?
> 
>
> Key: LUCENE-5952
> URL: https://issues.apache.org/jira/browse/LUCENE-5952
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Priority: Blocker
> Fix For: 4.10.1, 4.11, 5.0
>
> Attachments: LUCENE-5952.patch
>
>
> As discussed on the dev list, it's spooky how Version.java tries to fully 
> parse the incoming version string ... and then throw exceptions that lack 
> details about what invalid value it received, which file contained the 
> invalid value, etc.
> It also seems too low level to be checking versions (e.g. is not future proof 
> for when 4.10 is passed a 5.x index by accident), and seems redundant with 
> the codec headers we already have for checking versions?
> Should we just go back to lenient parsing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5952) Make Version.java lenient again?

2014-09-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134649#comment-14134649
 ] 

Uwe Schindler commented on LUCENE-5952:
---

I think we still have to restrict major to be in the valid range, otherwise the 
encodedValue may overflow. So major should be between 0 and 255, right?

Otherwise looks fine!
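
For illustration, the kind of bit-packing that makes a 0..255 bound necessary 
looks roughly like this (a sketch only, not the actual Version.java encoding):

{code:java}
// Sketch only, not the actual Version.java code: suppose major.minor.bugfix
// are packed into a single int with 8 bits per component.
public class VersionEncodingSketch {
  public static void main(String[] args) {
    int major = 4, minor = 10, bugfix = 1;
    int encodedValue = (major << 16) | (minor << 8) | bugfix;
    // With only 8 bits reserved for it, major must stay in 0..255; a larger
    // value would spill into (or past) the neighbouring bit fields.
    System.out.println(Integer.toHexString(encodedValue)); // prints 40a01
  }
}
{code}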

> Make Version.java lenient again?
> 
>
> Key: LUCENE-5952
> URL: https://issues.apache.org/jira/browse/LUCENE-5952
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Priority: Blocker
> Fix For: 4.10.1, 4.11, 5.0
>
> Attachments: LUCENE-5952.patch
>
>
> As discussed on the dev list, it's spooky how Version.java tries to fully 
> parse the incoming version string ... and then throw exceptions that lack 
> details about what invalid value it received, which file contained the 
> invalid value, etc.
> It also seems too low level to be checking versions (e.g. is not future proof 
> for when 4.10 is passed a 5.x index by accident), and seems redundant with 
> the codec headers we already have for checking versions?
> Should we just go back to lenient parsing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-5950:
---
Attachment: LUCENE-5950.patch

Ok, I addressed the comments, did another search for "7" in ant files, ran 
{{ant documentation-lint}} and {{ant nightly-smoke}}.

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
> LUCENE-5950.patch, LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5954) Store lucene version in segment_N

2014-09-15 Thread Ryan Ernst (JIRA)
Ryan Ernst created LUCENE-5954:
--

 Summary: Store lucene version in segment_N
 Key: LUCENE-5954
 URL: https://issues.apache.org/jira/browse/LUCENE-5954
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst


It would be nice to store the version of Lucene that wrote segments_N, so that 
we can use it to determine which major version an index was written with (for 
upgrading across major versions).  I think this could be squeezed in just after 
the segments_N header.
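
A rough sketch of the idea (illustrative only, not the actual format change; 
"segments" and CURRENT_FORMAT stand in for the real codec name and format 
constant):

{code:java}
// Write the Lucene version right after the existing header, and read it back
// from the same position.
void writeVersionAfterHeader(IndexOutput out) throws IOException {
  CodecUtil.writeHeader(out, "segments", CURRENT_FORMAT);   // existing header
  out.writeString(Version.LATEST.toString());               // new: version that wrote this segments_N
}

String readVersionAfterHeader(IndexInput in) throws IOException {
  CodecUtil.checkHeader(in, "segments", CURRENT_FORMAT, CURRENT_FORMAT);
  return in.readString();   // e.g. "5.0.0"; enough to decide the writer's major version
}
{code}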




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5952) Make Version.java lenient again?

2014-09-15 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5952:
---
Attachment: LUCENE-5952.patch

Patch, applies to 4.10.x.

I improved the error handling in Version.java to more clearly say what
offending value had been passed in, and changed the various places that were
calling Version.parse to say which resource/fileName they had read the
version from and to throw CorruptIndexException.

I also removed the two deprecated SegmentInfo ctors that took String
version and parsed it (we are allowed to just change this API: it's
experimental).

I also fixed Version.java to not pass judgement on the major version,
so we remain future proof.
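
The shape of the error-handling change described above is roughly the 
following (an illustrative sketch, not the actual patch; versionString and 
segmentsFileName stand in for whatever values the calling code has in hand):

{code:java}
// Illustrative sketch of the pattern described above, not the actual patch.
Version version;
try {
  version = Version.parse(versionString);
} catch (Exception cause) {
  // report both the offending value and the resource it was read from
  throw new CorruptIndexException("unable to parse version \"" + versionString
      + "\" read from " + segmentsFileName + ": " + cause.getMessage());
}
{code}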


> Make Version.java lenient again?
> 
>
> Key: LUCENE-5952
> URL: https://issues.apache.org/jira/browse/LUCENE-5952
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Priority: Blocker
> Fix For: 4.10.1, 4.11, 5.0
>
> Attachments: LUCENE-5952.patch
>
>
> As discussed on the dev list, it's spooky how Version.java tries to fully 
> parse the incoming version string ... and then throw exceptions that lack 
> details about what invalid value it received, which file contained the 
> invalid value, etc.
> It also seems too low level to be checking versions (e.g. is not future proof 
> for when 4.10 is passed a 5.x index by accident), and seems redundant with 
> the codec headers we already have for checking versions?
> Should we just go back to lenient parsing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5953) Make LockFactory final on Directory

2014-09-15 Thread Uwe Schindler (JIRA)
Uwe Schindler created LUCENE-5953:
-

 Summary: Make LockFactory final on Directory
 Key: LUCENE-5953
 URL: https://issues.apache.org/jira/browse/LUCENE-5953
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/store
Reporter: Uwe Schindler
Assignee: Uwe Schindler
 Fix For: 5.0


We should remove the setters for the LockFactory from Directory and make the 
field final. It is a bug to change the LockFactory after creating a directory, 
because you may break locking (if locks are currently held).

The LockFactory should be passed on ctor only.

The other suggestion: Should LockFactory have a directory at all? We moved away 
from having the lock separately from the index directory. This is no longer a 
supported configuration (since approx Lucene 2.9 or 3.0). I would like to 
remove the directory from LockFactory and make it part of the Directory only.
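
A minimal sketch of the proposed shape (illustrative only, a standalone class 
rather than the real Directory, and not the actual API change):

{code:java}
// Illustrative sketch: the lock factory is fixed at construction time and
// there is no setter, so it cannot be swapped while locks are held.
public abstract class SketchDirectory {
  private final LockFactory lockFactory;        // final: set once

  protected SketchDirectory(LockFactory lockFactory) {  // passed via the ctor only
    this.lockFactory = lockFactory;
  }

  public final Lock makeLock(String name) {
    return lockFactory.makeLock(name);          // locking always goes through the one factory
  }
  // note: no setLockFactory(...) anymore
}
{code}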



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6507) various bugs using localparams with stats.field

2014-09-15 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-6507.

   Resolution: Fixed
Fix Version/s: 5.0
   4.11

committed to trunk & backported to 4.x

given the amount of code churn involved here, i'm not really comfortable 
backporting to branch_4_10 ... not until this code gets smoked out a bit more 
(we'll see what happens with the timing of a 4.10.1 release - but for now i'm 
erring on the side of caution)

> various bugs using localparams with stats.field 
> 
>
> Key: SOLR-6507
> URL: https://issues.apache.org/jira/browse/SOLR-6507
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 4.11, 5.0
>
> Attachments: SOLR-6507.patch, SOLR-6507.patch
>
>
> StatsComponent has two different code paths for dealing with param parsing 
> depending on whether it's a single-node request or a distributed request, 
> which results in two very different looking bugs (but in my opinion they have 
> the same root cause: bogus local param parsing):
> * the documented local params for stats.field don't work on distributed stats 
> requests at all
> * per-field "calcdistinct" doesn't work if local params are used on a 
> single-node request



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6507) various bugs using localparams with stats.field

2014-09-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134567#comment-14134567
 ] 

ASF subversion and git services commented on SOLR-6507:
---

Commit 1625172 from hoss...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1625172 ]

SOLR-6507: Fixed several bugs involving stats.field used with local params 
(merge r1625163)

> various bugs using localparams with stats.field 
> 
>
> Key: SOLR-6507
> URL: https://issues.apache.org/jira/browse/SOLR-6507
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-6507.patch, SOLR-6507.patch
>
>
> StatsComponent has two different code paths for dealing with param parsing 
> depending on whether it's a single-node request or a distributed request, 
> which results in two very different looking bugs (but in my opinion they have 
> the same root cause: bogus local param parsing):
> * the documented local params for stats.field don't work on distributed stats 
> requests at all
> * per-field "calcdistinct" doesn't work if local params are used on a 
> single-node request



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-09-15 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134546#comment-14134546
 ] 

Anshum Gupta commented on SOLR-5986:


Ah ok, that makes sense. Just that I discarded the first request as soon as I 
created it.
The newer patches etc are on the newer request.

 https://reviews.apache.org/r/25658/

> Don't allow runaway queries from harming Solr cluster health or search 
> performance
> --
>
> Key: SOLR-5986
> URL: https://issues.apache.org/jira/browse/SOLR-5986
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 4.10
>
> Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch
>
>
> The intent of this ticket is to have all distributed search requests stop 
> wasting CPU cycles on requests that have already timed out or are so 
> complicated that they won't be able to execute. We have come across a case 
> where a nasty wildcard query within a proximity clause was causing the 
> cluster to enumerate terms for hours even though the query timeout was set to 
> minutes. This caused a noticeable slowdown within the system, which made us 
> restart the replicas that happened to service that one request; in the worst 
> case scenario, users with a relatively low zk timeout value will have nodes 
> start dropping from the cluster due to long GC pauses.
> [~amccurry] built a mechanism into Apache Blur to help with the issue in 
> BLUR-142 (see commit comment for code, though look at the latest code on the 
> trunk for newer bug fixes).
> Solr should be able to either prevent these problematic queries from running 
> by some heuristic (possibly estimated size of heap usage) or be able to 
> execute a thread interrupt on all query threads once the time threshold is 
> met. This issue mirrors what others have discussed on the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6512) Add a collections API call to attach arbitrary properties to a replica

2014-09-15 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134541#comment-14134541
 ] 

Erick Erickson commented on SOLR-6512:
--

So after thinking on it for a bit, I decided I like option 3, with the 3.3 v1 
variant. I'll attach a preliminary patch in a minute that has 
ADD/DELETE[REPLICAROLE] collections API commands. It's certainly not ready for 
committing yet.

Things I need to do yet:
1> Right now, it only really recognizes a "preferredLeader" role. I'll remove 
that restriction.
2> I'll add support on ADDREPLICAROLE for "onePerShard" so that if people 
want to put arbitrary roles in there they can. It'll default to "true". The 
"preferredLeader" command will barf if onePerShard=false; otherwise it's up to 
whoever adds a new role. 

So the train's leaving the station here; please object now if there are 
problems with this approach.

> Add a collections API call to attach arbitrary properties to a replica
> --
>
> Key: SOLR-6512
> URL: https://issues.apache.org/jira/browse/SOLR-6512
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> This is a sub-task for SOLR-6491, but seems generally useful. 
> Since this is in support of the "preferredLeader" functionality, I've run 
> into some considerations that I wanted some feedback on how to handle.
> "preferredLeader" has the restriction that there should only be one per 
> slice, so setting this for a particular node means removing the property for 
> all the other replicas on the slice. Not a problem to do, my question is more 
> whether this is something reasonable to enforce on an arbitrary property 
> based on what that property is? Perfectly do-able, but "semantically 
> challenged". Currently, this is never a node with "preferedLeader" set to 
> "false", it is forcibly removed from other nodes in the slice when this 
> property is assigned.
> The problem here is that there's nothing about assigning an arbitrary 
> property to a node that would reasonably imply this kind of behavior. One 
> could always control this with secondary flags on the command, e.g. 
> "shardExclusive=true|false" for instance, perhaps with safety checks in for 
> known one-per-shard properties like "preferredLeader".
> "preferredLeader" seems to fit more naturally into a "role", but currently 
> ADDROLE and DELETEROLE have nothing to do with the notion of setting a role 
> for a particular node relative to a collection/shard. Easy enough to add, but 
> enforcing the "only one node per slice may have this role" rule there is 
> similarly arbitrary and overloads the ADDROLE/DELETEROLE in a way that seems 
> equally confusing. Plus, checking whether the required collection/shard/node 
> params are present becomes based on the value of the property being set, 
> which is all equally arbitrary.
> The other interesting thing is that setting an arbitrary property on a node 
> would allow one to mess things up royally by, say, changing properties like 
> "core", or "base_url" or node_name at will. Actually this is potentially 
> useful, but very, very dangerous and I'm not particularly interested in 
> supporting it ;).  I suppose we could require a prefix, say the only settable 
> properties are "property.whatever".
> We could also add something specific to nodes, something like 
> ADDREPLICAROLE/DELETEREPLICAROLE, perhaps with sub-params like 
> "onlyOneAllowedPerShard", but this gets messy and relies on the users "doing 
> the right thing". I prefer enforcing rules like this  based on the role I 
> think. Or at least enforcing these kinds of requirements on the 
> "preferredLeader" role if we go that way.
> What are people's thoughts here? I think I'm tending towards the 
> ADDREPLICAROLE/DELETEREPLICAROLE way of doing this, but it's not set in 
> stone. I have code locally for arbitrary properties that I can modify for the 
> role bits.
> So, if I'm going to summarize the points I'd like feedback on:
> 1> Is setting arbitrary properties on a node desirable? If so, should we 
> require a prefix like "property" to prevent resetting values SolrCloud 
> depends on?
> 2> Is it better to piggyback on ADDROLE/DELETEROLE? Personally I'm not in 
> favor of this one; it's too messy, requiring additional parameters to work 
> right in this case.
> 3> Is the best option to create new collections API calls for 
> ADDREPLICAROLE/DELETEREPLICAROLE that
> 3.1> require collection/slice/node parameters
> 3.2> enforces the "onlyOnePerShard" rule for certain known roles
> 3.3 v1> allows users to specify arbitrary roles, with something like 
> "onlyOnePerShard" as an optional T|F parameter; otherwise it is totally open.
> -or-
> 3.3 v2> No support other than "preferredLeader", only roles that are 

[jira] [Commented] (SOLR-6506) SolrCloud route.name=implicit not effect

2014-09-15 Thread Xu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134537#comment-14134537
 ] 

Xu Zhang commented on SOLR-6506:


I'm not sure if this is a bug; the Collections API works fine for me.

Actually, I think the best way to set up a collection correctly is by using 
the Collections API.



> SolrCloud   route.name=implicit  not effect
> ---
>
> Key: SOLR-6506
> URL: https://issues.apache.org/jira/browse/SOLR-6506
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.9
>Reporter: Zhang Jingpeng
>
>  I start the first node with 
> "java -Xms512m -Xmx1536m -DzkRun -Djetty.port=8983 
> -Dshards=shard1,shard2,shard3 -Droute.name=implicit
> -Dsolr.data.dir=/data/data1 -Dsolr.solr.home=multicore -Dbootstrap_conf=true 
> -DzkHost=localhost:2182 -jar start.jar "
> I want the second node to start up in shard2, but it defaults to shard1; the 
> second node's command is
> "java -Xms512m -Xmx1536m -Djetty.port=8988 -Dsolr.data.dir=/data/data6 
> -Dsolr.solr.home=multicore -DzkHost=localhost:2182 -jar start.jar"
> Using 
> "http://ip:port/solr/admin/collections?action=CREATESHARD&shard=shard2&collection=mycol"
> to create a new shard also has no effect. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6509) Solr start scripts interactive mode doesn't honor -z argument

2014-09-15 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6509:
-
Attachment: SOLR-6509.patch

The start script now honors the -z argument when launching the interactive 
SolrCloud example, such as:

{code}
bin/solr -e cloud -z localhost:2181
{code}

This will run the interactive example as before, but instead of launching the 
embedded ZooKeeper instance, it will start each node with -z localhost:2181 to 
connect to an external ZK instance.

> Solr start scripts interactive mode doesn't honor -z argument
> -
>
> Key: SOLR-6509
> URL: https://issues.apache.org/jira/browse/SOLR-6509
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Timothy Potter
> Attachments: SOLR-6509.patch
>
>
> The solr start script ignores the -z parameter when combined with -e cloud 
> (interactive cloud mode).
> {code}
> ./bin/solr -z localhost:2181 -e cloud
> Welcome to the SolrCloud example!
> This interactive session will help you launch a SolrCloud cluster on your 
> local workstation.
> To begin, how many Solr nodes would you like to run in your local cluster? 
> (specify 1-4 nodes) [2] 1
> Ok, let's start up 1 Solr nodes for your example SolrCloud cluster.
> Please enter the port for node1 [8983] 
> 8983
> Cloning /home/shalin/programs/solr-4.10.1/example into 
> /home/shalin/programs/solr-4.10.1/node1
> Starting up SolrCloud node1 on port 8983 using command:
> solr start -cloud -d node1 -p 8983 
> Waiting to see Solr listening on port 8983 [-]  
> Started Solr server on port 8983 (pid=27291). Happy searching!
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6509) Solr start scripts interactive mode doesn't honor -z argument

2014-09-15 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6509:


Assignee: Timothy Potter

> Solr start scripts interactive mode doesn't honor -z argument
> -
>
> Key: SOLR-6509
> URL: https://issues.apache.org/jira/browse/SOLR-6509
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Timothy Potter
>
> The solr start script ignores the -z parameter when combined with -e cloud 
> (interactive cloud mode).
> {code}
> ./bin/solr -z localhost:2181 -e cloud
> Welcome to the SolrCloud example!
> This interactive session will help you launch a SolrCloud cluster on your 
> local workstation.
> To begin, how many Solr nodes would you like to run in your local cluster? 
> (specify 1-4 nodes) [2] 1
> Ok, let's start up 1 Solr nodes for your example SolrCloud cluster.
> Please enter the port for node1 [8983] 
> 8983
> Cloning /home/shalin/programs/solr-4.10.1/example into 
> /home/shalin/programs/solr-4.10.1/node1
> Starting up SolrCloud node1 on port 8983 using command:
> solr start -cloud -d node1 -p 8983 
> Waiting to see Solr listening on port 8983 [-]  
> Started Solr server on port 8983 (pid=27291). Happy searching!
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-09-15 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134532#comment-14134532
 ] 

Steve Rowe commented on SOLR-5986:
--

[~anshumg], you apparently created two review requests, and [~rcmuir] reviewed 
the first one: https://reviews.apache.org/r/25656/ - you can see his review 
there.

> Don't allow runaway queries from harming Solr cluster health or search 
> performance
> --
>
> Key: SOLR-5986
> URL: https://issues.apache.org/jira/browse/SOLR-5986
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 4.10
>
> Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch
>
>
> The intent of this ticket is to have all distributed search requests stop 
> wasting CPU cycles on requests that have already timed out or are so 
> complicated that they won't be able to execute. We have come across a case 
> where a nasty wildcard query within a proximity clause was causing the 
> cluster to enumerate terms for hours even though the query timeout was set to 
> minutes. This caused a noticeable slowdown within the system, which made us 
> restart the replicas that happened to service that one request; in the worst 
> case scenario, users with a relatively low zk timeout value will have nodes 
> start dropping from the cluster due to long GC pauses.
> [~amccurry] built a mechanism into Apache Blur to help with the issue in 
> BLUR-142 (see commit comment for code, though look at the latest code on the 
> trunk for newer bug fixes).
> Solr should be able to either prevent these problematic queries from running 
> by some heuristic (possibly estimated size of heap usage) or be able to 
> execute a thread interrupt on all query threads once the time threshold is 
> met. This issue mirrors what others have discussed on the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134452#comment-14134452
 ] 

Uwe Schindler edited comment on LUCENE-5950 at 9/15/14 9:17 PM:


There are some changes in common-build that are wrong: the workaround for the 
Ant bug is exactly that we pass "build.compiler=javac1.7" for every 
version >= 1.8, because Ant 1.8.3 and 1.8.4 have a bug, so we must pass 
javac1.7 (see the link supplied in the comment). So don't change that 
condition! Because we now require Java 8 as a minimum, you can always set 
build.compiler to javac1.7 as a workaround:

{code:xml}
  
  

  
  

  
{code}

Also, some conditions/fails are commented out (also in the root build.xml 
regarding the smoke tester). The "needed minimum Java 8" check was commented 
out; why? 


was (Author: thetaphi):
There are some changes in common-build that are wrong: The workaround for the 
Ant bug is exactly like that, that we pass "build.compiler=javac1.7" for every 
version >= 1.8, because Ant 1.8.2 and 1.8.3 have a bug, so we must pass 
javac1.7 (see link supplied in comment). So don't change that condition! 
Because we are now on minimum Java 8, you can set build.compiler always to 
javac1.7 as workaround:

{code:xml}
  
  

  
  

  
{code}

Also some conditions/fails are commented out (also in root build.xml regarding 
smoke tester). The "needed minimum Java 8"  was commented out, why? 

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
> LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5952) Make Version.java lenient again?

2014-09-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134490#comment-14134490
 ] 

Michael McCandless commented on LUCENE-5952:


In fact it wasn't lenient before at all ... it used Version.valueOf (and 
Version was an enum) so that would also hit IAE for any invalid versions (i.e. 
that did not have an exact matching enum constant)...

I think for this issue I'll just try to improve the error reporting when there 
is a problem...

> Make Version.java lenient again?
> 
>
> Key: LUCENE-5952
> URL: https://issues.apache.org/jira/browse/LUCENE-5952
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Priority: Blocker
> Fix For: 4.10.1, 4.11, 5.0
>
>
> As discussed on the dev list, it's spooky how Version.java tries to fully 
> parse the incoming version string ... and then throw exceptions that lack 
> details about what invalid value it received, which file contained the 
> invalid value, etc.
> It also seems too low level to be checking versions (e.g. is not future proof 
> for when 4.10 is passed a 5.x index by accident), and seems redundant with 
> the codec headers we already have for checking versions?
> Should we just go back to lenient parsing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-09-15 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5986:
---
Attachment: SOLR-5986.patch

Thanks Robert for the review. Updated the patch with most of your feedback + 
recommendations.

I didn't change the design from using a ThreadLocal to passing the timeout in 
the constructor, as the user might pre-construct and reuse the reader/searcher 
(Solr specifically wouldn't play well with such a change).

I did consider doing it that way but wasn't able to figure out a way to do 
that. I'll just spend some more time to see if there's a clean way to do it.

About the test relying on the system clock: not really that much, as the sleep 
is in the wrapped-up .next() and the timeout values are not even close.

Also, I can't figure out how to view the review on the review board (it 
doesn't show up for me, and the mail went to spam until Steve mentioned the 
email). I've posted the updated patch there to make it easier for everyone to 
look at it.
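
For context, a ThreadLocal-based design of the kind being discussed could look 
roughly like this (a hypothetical sketch, not the code in the patch):

{code:java}
// Hypothetical sketch of a ThreadLocal-based timeout holder, not the patch itself.
public final class QueryTimeoutHolder {
  private static final ThreadLocal<Long> DEADLINE_NANOS = new ThreadLocal<>();

  /** Called by the request handler before running the query. */
  public static void set(long timeAllowedMillis) {
    DEADLINE_NANOS.set(System.nanoTime() + timeAllowedMillis * 1_000_000L);
  }

  /** Checked from hot spots such as TermsEnum.next() wrappers. */
  public static boolean shouldExit() {
    Long deadline = DEADLINE_NANOS.get();
    return deadline != null && System.nanoTime() - deadline > 0;
  }

  /** Called in a finally block when the request finishes. */
  public static void clear() {
    DEADLINE_NANOS.remove();
  }
}
{code}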

> Don't allow runaway queries from harming Solr cluster health or search 
> performance
> --
>
> Key: SOLR-5986
> URL: https://issues.apache.org/jira/browse/SOLR-5986
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 4.10
>
> Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch
>
>
> The intent of this ticket is to have all distributed search requests stop 
> wasting CPU cycles on requests that have already timed out or are so 
> complicated that they won't be able to execute. We have come across a case 
> where a nasty wildcard query within a proximity clause was causing the 
> cluster to enumerate terms for hours even though the query timeout was set to 
> minutes. This caused a noticeable slowdown within the system, which made us 
> restart the replicas that happened to service that one request; in the worst 
> case scenario, users with a relatively low zk timeout value will have nodes 
> start dropping from the cluster due to long GC pauses.
> [~amccurry] built a mechanism into Apache Blur to help with the issue in 
> BLUR-142 (see commit comment for code, though look at the latest code on the 
> trunk for newer bug fixes).
> Solr should be able to either prevent these problematic queries from running 
> by some heuristic (possibly estimated size of heap usage) or be able to 
> execute a thread interrupt on all query threads once the time threshold is 
> met. This issue mirrors what others have discussed on the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6485) ReplicationHandler should have an option to throttle the speed of replication

2014-09-15 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134482#comment-14134482
 ] 

Ramkumar Aiyengar commented on SOLR-6485:
-

Looks about right. I haven't checked fully, but shouldn't the reserve and 
release in finally be swapped? Otherwise you have a race..

> ReplicationHandler should have an option to throttle the speed of replication
> -
>
> Key: SOLR-6485
> URL: https://issues.apache.org/jira/browse/SOLR-6485
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Assignee: Noble Paul
> Attachments: SOLR-6485.patch, SOLR-6485.patch, SOLR-6485.patch, 
> SOLR-6485.patch, SOLR-6485.patch
>
>
> The ReplicationHandler should have an option to throttle the speed of 
> replication.
> It is useful for people who want bring up nodes in their SolrCloud cluster or 
> when have a backup-restore API and not eat up all their network bandwidth 
> while replicating.
> I am writing a test case and will attach a patch shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Review Request 25658: Timeout queries when they take too long to rewrite/enumerate over terms.

2014-09-15 Thread Anshum Gupta

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25658/
---

(Updated Sept. 15, 2014, 9:02 p.m.)


Review request for lucene.


Changes
---

Updated the patch after integrating most of Robert's recommended changes.


Bugs: SOLR-5986
https://issues.apache.org/jira/browse/SOLR-5986


Repository: lucene


Description
---

Timeout queries when they take too long to rewrite/enumerate over terms.


Diffs (updated)
-

  
trunk/lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java 
PRE-CREATION 
  trunk/lucene/core/src/java/org/apache/lucene/index/QueryTimeout.java 
PRE-CREATION 
  
trunk/lucene/core/src/test/org/apache/lucene/index/TestExitableDirectoryReader.java
 PRE-CREATION 
  trunk/solr/core/src/java/org/apache/solr/handler/MoreLikeThisHandler.java 
1625118 
  trunk/solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java 
1625118 
  trunk/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java 
1625118 
  
trunk/solr/core/src/test/org/apache/solr/core/ExitableDirectoryReaderTest.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/25658/diff/


Testing
---

Added Lucene/Solr tests. Tested a bit manually.


Thanks,

Anshum Gupta



[jira] [Commented] (SOLR-6507) various bugs using localparams with stats.field

2014-09-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134474#comment-14134474
 ] 

ASF subversion and git services commented on SOLR-6507:
---

Commit 1625163 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1625163 ]

SOLR-6507: Fixed several bugs involving stats.field used with local params

> various bugs using localparams with stats.field 
> 
>
> Key: SOLR-6507
> URL: https://issues.apache.org/jira/browse/SOLR-6507
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-6507.patch, SOLR-6507.patch
>
>
> StatsComponent has two different code paths for dealing with param parsing 
> depending on whether it's a single-node request or a distributed request, 
> which results in two very different looking bugs (but in my opinion they have 
> the same root cause: bogus local param parsing):
> * the documented local params for stats.field don't work on distributed stats 
> requests at all
> * per-field "calcdistinct" doesn't work if local params are used on a 
> single-node request



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134469#comment-14134469
 ] 

Uwe Schindler commented on LUCENE-5950:
---

{noformat}
+Oracle Java 8 or OpenJDK 8, be sure to not use the GA build 147 or
{noformat}

GA build 147 was the broken Java 7 initial release. So the whole sentence can 
go away. We may need to change it to make sure that 8u20 cannot be used to 
compile.

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
> LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134466#comment-14134466
 ] 

Uwe Schindler commented on LUCENE-5950:
---

In nightly-smoke the condition that checks for Java 8 is still "1.7".

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
> LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-5950:
---
Attachment: LUCENE-5950.patch

Ok, I changed build.compiler back to {{javac1.7}}.  And commenting out the min 
version was unintentional.  Both are fixed with this new patch.

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
> LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134464#comment-14134464
 ] 

Uwe Schindler commented on LUCENE-5950:
---

Please go through the build.xml changes a second time and check them all. A 
simple search/replace in build.xml is likely to break things.

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
> LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134462#comment-14134462
 ] 

Uwe Schindler commented on LUCENE-5950:
---

This condition is also wrong:

{code:xml}
   
 
-  
+  
 
   
{code}

It can stay as is, but ideally it needs to be converted to no longer be 
conditional. With the above condition, the javadocs won't build:
if the Java version is Java 8+, we must pass "-Xdoclint:none", so the correct 
form is an unconditional: 

{code:xml}

{code}

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134452#comment-14134452
 ] 

Uwe Schindler commented on LUCENE-5950:
---

There are some changes in common-build that are wrong: the workaround for the 
Ant bug is exactly that we pass "build.compiler=javac1.7" for every 
version >= 1.8, because Ant 1.8.2 and 1.8.3 have a bug, so we must pass 
javac1.7 (see the link supplied in the comment). So don't change that 
condition! Because we now require Java 8 as a minimum, you can always set 
build.compiler to javac1.7 as a workaround:

{code:xml}
  
  

  
  

  
{code}

Also, some conditions/fails are commented out (also in the root build.xml 
regarding the smoke tester). The "needed minimum Java 8" check was commented 
out; why? 

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6521) CloudSolrServer should synchronize cache cluster state loading

2014-09-15 Thread Jessica Cheng Mallet (JIRA)
Jessica Cheng Mallet created SOLR-6521:
--

 Summary: CloudSolrServer should synchronize cache cluster state 
loading
 Key: SOLR-6521
 URL: https://issues.apache.org/jira/browse/SOLR-6521
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Jessica Cheng Mallet


Under heavy load-testing with the new solrj client that caches the cluster 
state instead of setting a watcher, I started seeing lots of zk connection loss 
on the client side when refreshing the CloudSolrServer collectionStateCache, 
and this was causing crazy client-side 99.9th-percentile latency (~15 sec). I 
swapped the cache out for Guava's LoadingCache (which does locking to ensure 
only one thread loads the content for a given key while the other threads that 
want the same key wait); the connection loss went away and the 99.9th-percentile 
latency also went down to just about 1 sec.
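
For reference, the Guava pattern being described looks roughly like this (a 
sketch; fetchCollectionStateFromZk() and the expiry settings are made up for 
illustration, and this is not the actual CloudSolrServer code):

{code:java}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class CollectionStateCacheSketch {
  private final LoadingCache<String, Object> collectionStateCache =
      CacheBuilder.newBuilder()
          .expireAfterWrite(60, TimeUnit.SECONDS)
          .build(new CacheLoader<String, Object>() {
            @Override
            public Object load(String collection) throws Exception {
              // Only one thread per key executes this; other threads asking
              // for the same collection block until the value is loaded.
              return fetchCollectionStateFromZk(collection);
            }
          });

  public Object getState(String collection) throws ExecutionException {
    return collectionStateCache.get(collection);
  }

  private Object fetchCollectionStateFromZk(String collection) {
    // placeholder for the real ZooKeeper read
    return new Object();
  }
}
{code}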



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-09-15 Thread Jessica Cheng Mallet (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134449#comment-14134449
 ] 

Jessica Cheng Mallet commented on SOLR-5473:


[~noble.paul] Here: https://issues.apache.org/jira/browse/SOLR-6521. Thanks!

> Split clusterstate.json per collection and watch states selectively 
> 
>
> Key: SOLR-5473
> URL: https://issues.apache.org/jira/browse/SOLR-5473
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: SolrCloud
> Fix For: 4.10, 5.0
>
> Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
> SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74_POC.patch, 
> SOLR-5473-configname-fix.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
> SOLR-5473.patch, SOLR-5473.patch, SOLR-5473_no_ui.patch, 
> SOLR-5473_undo.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log
>
>
> As defined in the parent issue, store the states of each collection under 
> /collections/collectionname/state.json node and watches state changes 
> selectively.
> https://reviews.apache.org/r/24220/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5952) Make Version.java lenient again?

2014-09-15 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134438#comment-14134438
 ] 

Ryan Ernst commented on LUCENE-5952:


The reason I added this check was to satisfy an existing test asserting that parse 
would fail if passed a version like 1.0.  So it was not completely lenient before, 
but I have no problem with allowing Version.java to accept any version and 
letting the code consuming the version decide whether it is acceptable.
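
For illustration only, a sketch of the division of labor described here: parse 
leniently and let the consuming code decide what it accepts. Version.parseLeniently 
and onOrAfter are existing APIs; the minimum-version check itself is just an example, 
not part of any patch:

{code:java}
import org.apache.lucene.util.Version;

public class LenientVersionCheckSketch {
  public static void main(String[] args) throws Exception {
    // Lenient parse: accept whatever the string says...
    Version parsed = Version.parseLeniently("4.3");

    // ...and let the calling code decide what it considers acceptable.
    Version required = Version.parseLeniently("4.0");
    if (!parsed.onOrAfter(required)) {
      throw new IllegalArgumentException("version too old: " + parsed);
    }
    System.out.println("accepted " + parsed);
  }
}
{code}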

> Make Version.java lenient again?
> 
>
> Key: LUCENE-5952
> URL: https://issues.apache.org/jira/browse/LUCENE-5952
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Priority: Blocker
> Fix For: 4.10.1, 4.11, 5.0
>
>
> As discussed on the dev list, it's spooky how Version.java tries to fully 
> parse the incoming version string ... and then throw exceptions that lack 
> details about what invalid value it received, which file contained the 
> invalid value, etc.
> It also seems too low level to be checking versions (e.g. is not future proof 
> for when 4.10 is passed a 5.x index by accident), and seems redundant with 
> the codec headers we already have for checking versions?
> Should we just go back to lenient parsing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134436#comment-14134436
 ] 

Ryan Ernst commented on LUCENE-5950:


Note that the patch has a stupid mistake in the CHANGES entry.  I have it fixed 
locally now.

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-5950:
---
Attachment: LUCENE-5950.patch

One more patch, now up to date with trunk (CachingDirectoryFactoryTest 
appears to succeed now).

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6249) Schema API changes return success before all cores are updated

2014-09-15 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6249:
-
Attachment: SOLR-6249.patch

Updated patch; ignore the previous one.

Test coverage for this feature is integrated into the 
TestCloudManagedSchemaConcurrent class.

> Schema API changes return success before all cores are updated
> --
>
> Key: SOLR-6249
> URL: https://issues.apache.org/jira/browse/SOLR-6249
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis, SolrCloud
>Reporter: Gregory Chanan
>Assignee: Timothy Potter
> Attachments: SOLR-6249.patch, SOLR-6249.patch
>
>
> See SOLR-6137 for more details.
> The basic issue is that Schema API changes return success when the first core 
> is updated, but other cores asynchronously read the updated schema from 
> ZooKeeper.
> So a client application could make a Schema API change and then index some 
> documents based on the new schema that may fail on other nodes.
> Possible fixes:
> 1) Make the Schema API calls synchronous
> 2) Give the client some ability to track the state of the schema.  They can 
> already do this to a certain extent by checking the Schema API on all the 
> replicas and verifying that the field has been added, though this is pretty 
> cumbersome.  Maybe it makes more sense to do this sort of thing on the 
> collection level, i.e. Schema API changes return the zk version to the 
> client.  We add an API to return the current zk version.  On a replica, if 
> the zk version is >= the version the client has, the client knows that 
> replica has at least seen the schema change.  We could also provide an API to 
> do the distribution and checking across the different replicas of the 
> collection so that clients don't need to do that themselves.
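
For illustration, a rough client-side sketch of option 2; the VersionProbe interface 
is a hypothetical stand-in for whatever API would expose a replica's schema zk 
version, not an existing endpoint:

{code:java}
import java.util.List;
import java.util.concurrent.TimeUnit;

public class SchemaVersionWaiterSketch {

  // Hypothetical probe: ask one replica which schema zk version it has loaded.
  interface VersionProbe {
    int getSchemaZkVersion(String replicaBaseUrl) throws Exception;
  }

  /** Block until every replica reports a schema zk version >= expectedVersion. */
  static void waitForSchemaVersion(VersionProbe probe, List<String> replicaBaseUrls,
                                   int expectedVersion, long timeoutMs) throws Exception {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    for (String replica : replicaBaseUrls) {
      while (probe.getSchemaZkVersion(replica) < expectedVersion) {
        if (System.nanoTime() > deadline) {
          throw new IllegalStateException(
              "replica " + replica + " never saw schema zk version " + expectedVersion);
        }
        Thread.sleep(250); // poll interval chosen arbitrarily for the sketch
      }
    }
  }
}
{code}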



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-5950:
---
Attachment: LUCENE-5950.patch

New patch which avoids the javac issue (see {{SpanMultiTermQueryWrapper}}).  
Lucene tests pass; Solr tests have security manager failures in 
CachingDirectoryFactoryTest.  I also left the {{JRE_IS_MINIMUM_JAVA8}} constant 
because the morphline tests have an assumeFalse on it...which means those tests 
simply should not be run on trunk? It was unclear whether the underlying issue 
had been addressed (it looks like it was filed almost a year ago, months before 
the Java 8 GA release). 
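
For context, the assume pattern referred to above looks roughly like this in a 
Lucene/Solr test (the class name and message are illustrative):

{code:java}
import org.apache.lucene.util.Constants;
import org.apache.lucene.util.LuceneTestCase;

public class MorphlineStyleTest extends LuceneTestCase {
  public void testSomething() {
    // Skips the test whenever the JVM is Java 8+; on a Java-8-only trunk
    // a test guarded like this would simply never run.
    assumeFalse("morphlines dependency not yet compatible with Java 8",
        Constants.JRE_IS_MINIMUM_JAVA8);
    // ... actual test body ...
  }
}
{code}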

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch, LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5952) Make Version.java lenient again?

2014-09-15 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-5952:
--

 Summary: Make Version.java lenient again?
 Key: LUCENE-5952
 URL: https://issues.apache.org/jira/browse/LUCENE-5952
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Priority: Blocker
 Fix For: 4.10.1, 4.11, 5.0


As discussed on the dev list, it's spooky how Version.java tries to fully parse 
the incoming version string ... and then throw exceptions that lack details 
about what invalid value it received, which file contained the invalid value, 
etc.

It also seems too low level to be checking versions (e.g. is not future proof 
for when 4.10 is passed a 5.x index by accident), and seems redundant with the 
codec headers we already have for checking versions?

Should we just go back to lenient parsing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6486) solr start script can have a debug flag option

2014-09-15 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6486:
-
Attachment: SOLR-6486.patch

Went with -a (for additional options); example:

{code}
bin/solr start -a 
"-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983"
{code}

In most cases, you'll need to wrap the value for -a in double quotes, so I've 
included that as a tip in the usage text.

This approach allows the user to pass additional parameters to the JVM when 
starting Solr, such as Noble's example of setting Java debug args.

Patch tested on Mac and Windows Server 2012.
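
As a usage note (not part of the patch): once Solr is started with those JDWP 
options, a debugger can attach to the listed port, for example with the standard 
jdb socket connector:

{code}
jdb -connect com.sun.jdi.SocketAttach:hostname=localhost,port=18983
{code}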

> solr start script can have a debug flag option
> --
>
> Key: SOLR-6486
> URL: https://issues.apache.org/jira/browse/SOLR-6486
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools
>Reporter: Noble Paul
>Assignee: Timothy Potter
> Attachments: SOLR-6486.patch
>
>
> normally I would add this line to my java -jar start.jar
> -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983
> the script should take the whole string, or assume the debug port to be 
> solrPort+1 (if all I pass is debug=true), or if 
> debug="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983" is 
> passed, use it verbatim.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6297) Distributed spellcheck with WordBreakSpellchecker can lose suggestions

2014-09-15 Thread Steve Molloy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134412#comment-14134412
 ] 

Steve Molloy commented on SOLR-6297:


Sorry for not replying sooner, but yes, I applied the patch to our codebase and 
it seems to fix the issue. Thanks.

> Distributed spellcheck with WordBreakSpellchecker can lose suggestions
> --
>
> Key: SOLR-6297
> URL: https://issues.apache.org/jira/browse/SOLR-6297
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Molloy
>Assignee: James Dyer
> Fix For: 4.11
>
> Attachments: SOLR-6297.patch, SOLR-6297.patch, SOLR-6297.patch, 
> SOLR-6297.patch
>
>
> When performing a spellcheck request in distributed environment with the 
> WordBreakSpellChecker configured, the shard response merging logic can lose 
> some suggestions. Basically, the merging logic ensures that all shards marked 
> the query as not being correctly spelled, which is good, but also expects all 
> shards to return some suggestions, which isn't necessarily the case. So if 
> shard 1 returns 10 suggestions but shard 2 returns none, the final result 
> will contain no suggestions because the term has suggestions from only 1 of 2 
> shards.
> This isn't the case with the DirectSolrSpellChecker which works properly.
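
For illustration, a stripped-down sketch of the merge rule being described; the 
types and method below are placeholders, not the actual SpellCheckComponent code. 
The term is reported as misspelled only if all shards agree, but suggestions are 
kept from whichever shards returned any:

{code:java}
import java.util.ArrayList;
import java.util.List;

public class SuggestionMergeSketch {

  // Placeholder for one shard's spellcheck result for a single term.
  static class ShardTermResult {
    boolean correctlySpelled;
    List<String> suggestions = new ArrayList<>();
  }

  /** Returns null when any shard considers the term correctly spelled. */
  static List<String> merge(List<ShardTermResult> shardResults) {
    for (ShardTermResult r : shardResults) {
      if (r.correctlySpelled) {
        return null; // every shard must flag the term as misspelled
      }
    }
    List<String> merged = new ArrayList<>();
    for (ShardTermResult r : shardResults) {
      merged.addAll(r.suggestions); // shards with no suggestions are simply skipped
    }
    return merged;
  }
}
{code}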



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.8.0) - Build # 1795 - Failure!

2014-09-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/1795/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestIndexWriterMerging.testNoWaitClose

Error Message:
file "_9q.fdx" was already written to

Stack Trace:
java.io.IOException: file "_9q.fdx" was already written to
at 
__randomizedtesting.SeedInfo.seed([6907C35743844F49:4EF6628690929FBD]:0)
at 
org.apache.lucene.store.MockDirectoryWrapper.createOutput(MockDirectoryWrapper.java:561)
at 
org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:44)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:113)
at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:120)
at 
org.apache.lucene.index.DefaultIndexingChain.initStoredFieldsWriter(DefaultIndexingChain.java:84)
at 
org.apache.lucene.index.DefaultIndexingChain.startStoredFields(DefaultIndexingChain.java:254)
at 
org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:298)
at 
org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:242)
at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:454)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1542)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1257)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1238)
at 
org.apache.lucene.index.TestIndexWriterMerging.testNoWaitClose(TestIndexWriterMerging.java:388)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedt

[jira] [Updated] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-09-15 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5986:
---
Attachment: SOLR-5986.patch

Changed some javadoc to make things easier for people to understand.

Perhaps a better way to review: https://reviews.apache.org/r/25658/

P.S.: I'm not sure how it should be done here, but I uploaded the patch to Review 
Board and pasted the link here. Does anyone know of a way to integrate the two so 
that I wouldn't have to upload in both places and paste the link?

> Don't allow runaway queries from harming Solr cluster health or search 
> performance
> --
>
> Key: SOLR-5986
> URL: https://issues.apache.org/jira/browse/SOLR-5986
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 4.10
>
> Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch
>
>
> The intent of this ticket is to have all distributed search requests stop 
> wasting CPU cycles on requests that have already timed out or are so 
> complicated that they won't be able to execute. We have come across a case 
> where a nasty wildcard query within a proximity clause was causing the 
> cluster to enumerate terms for hours even though the query timeout was set to 
> minutes. This caused a noticeable slowdown within the system and forced us to 
> restart the replicas that happened to service that one request; in the worst 
> case, users with a relatively low zk timeout value will have nodes start 
> dropping from the cluster due to long GC pauses.
> [~amccurry] Built a mechanism into Apache Blur to help with the issue in 
> BLUR-142 (see commit comment for code, though look at the latest code on the 
> trunk for newer bug fixes).
> Solr should be able to either prevent these problematic queries from running 
> by some heuristic (possibly estimated size of heap usage) or be able to 
> execute a thread interrupt on all query threads once the time threshold is 
> met. This issue mirrors what others have discussed on the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Review Request 25658: Timeout queries when they take too long to rewrite/enumerate over terms.

2014-09-15 Thread Anshum Gupta

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25658/
---

Review request for lucene.


Bugs: SOLR-5986
https://issues.apache.org/jira/browse/SOLR-5986


Repository: lucene


Description
---

Timeout queries when they take too long to rewrite/enumerate over terms.


Diffs
-

  
trunk/lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java 
PRE-CREATION 
  trunk/lucene/core/src/java/org/apache/lucene/index/QueryTimeout.java 
PRE-CREATION 
  
trunk/lucene/core/src/test/org/apache/lucene/index/TestExitableDirectoryReader.java
 PRE-CREATION 
  trunk/solr/core/src/java/org/apache/solr/handler/MoreLikeThisHandler.java 
1625118 
  trunk/solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java 
1625118 
  trunk/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java 
1625118 
  
trunk/solr/core/src/test/org/apache/solr/core/ExitableDirectoryReaderTest.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/25658/diff/


Testing
---

Added Lucene/Solr tests. Tested a bit manually.


Thanks,

Anshum Gupta



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1209: POMs out of sync

2014-09-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1209/

2 tests failed.
REGRESSION:  
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:43013, 
http://127.0.0.1:43008, http://127.0.0.1:43016]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:43013, http://127.0.0.1:43008, 
http://127.0.0.1:43016]
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:550)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.doRequest(LBHttpSolrServer.java:343)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:304)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:874)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:116)
at 
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:106)


REGRESSION:  
org.apache.solr.client.solrj.SolrExampleBinaryTest.testChildDoctransformer

Error Message:
Expected mime type application/octet-stream but got text/html. 


Error 500 Server Error


HTTP ERROR: 500
Problem accessing /solr/collection1/select. Reason:
Server Error
Powered by Jetty://

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected 
mime type application/octet-stream but got text/html. 


Error 500 Server Error


HTTP ERROR: 500
Problem accessing /solr/collection1/select. Reason:
Server Error
Powered by Jetty://

at 
__randomizedtesting.SeedInfo.seed([254A9A14835AD727:5690858E0F42A021]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:512)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.client.solrj.SolrExampleTests.testChildDoctransformer(SolrExampleTests.java:1373)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(D

Re: Review Request 25656: Timeout queries during rewrite/expansion based on timeAllowed parameter

2014-09-15 Thread Robert Muir

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25656/#review53390
---



trunk/lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java


What is the purpose of this? Lucene is an API; it doesn't do logging. It 
also seems unused.



trunk/lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java


Someone can just use FilterAtomicReader.unwrap()?



trunk/lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java


All of these methods that return in.XXX are unnecessary. FilterAtomicReader 
does that already.



trunk/lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java


This should extend FilterFields. The _ notation is strange.



trunk/lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java


This should extend FilterTerms



trunk/lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java


This should extend FilterTermsEnum



trunk/lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java


I don't agree with the 'global static threadlocal QueryTimeout' class; why 
use this approach? This thing can just take an instance instead.



trunk/lucene/core/src/java/org/apache/lucene/index/QueryTimeout.java


There is no way to close this threadlocal (except GC), and I don't think the 
timeout processing should be done with a threadlocal anyway. Instead, why can't the 
reader just take a timeout object? FilterReaders are cheap; you could even make 
one for each query to keep things simple and contained.
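
For illustration, the shape of the alternative being suggested, where the wrapper 
takes a timeout object per instance instead of consulting static thread-local 
state; the class and interface names below are placeholders, not the 
ExitableDirectoryReader/QueryTimeout API from the patch:

{code:java}
// Placeholder names; not the classes from the patch under review.
interface QueryDeadline {
  boolean shouldExit(); // true once the query has run out of time
}

class TimeBoundDeadline implements QueryDeadline {
  private final long deadlineNanos;

  TimeBoundDeadline(long timeAllowedMs) {
    this.deadlineNanos = System.nanoTime() + timeAllowedMs * 1_000_000L;
  }

  @Override
  public boolean shouldExit() {
    return System.nanoTime() > deadlineNanos;
  }
}

// The wrapper is constructed per query with its own deadline instance,
// so there is no global or thread-local state to manage or clean up.
class DeadlineCheckingEnumeration {
  private final QueryDeadline deadline;
  private final java.util.Iterator<String> terms;

  DeadlineCheckingEnumeration(java.util.Iterator<String> terms, QueryDeadline deadline) {
    this.terms = terms;
    this.deadline = deadline;
  }

  String next() {
    if (deadline.shouldExit()) {
      throw new RuntimeException("query exceeded timeAllowed during term enumeration");
    }
    return terms.hasNext() ? terms.next() : null;
  }
}
{code}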



trunk/lucene/core/src/test/org/apache/lucene/index/TestExitableDirectoryReader.java


It looks like this test depends on wall-clock time? Can it be mocked 
instead? I don't want false failures because the test "ran too fast".


- Robert Muir


On Sept. 15, 2014, 7:45 p.m., Anshum Gupta wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/25656/
> ---
> 
> (Updated Sept. 15, 2014, 7:45 p.m.)
> 
> 
> Review request for lucene.
> 
> 
> Bugs: SOLR-5986
> https://issues.apache.org/jira/browse/SOLR-5986
> 
> 
> Repository: lucene
> 
> 
> Description
> ---
> 
> Latest patch
> 
> 
> Diffs
> -
> 
>   
> trunk/lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java
>  PRE-CREATION 
>   trunk/lucene/core/src/java/org/apache/lucene/index/QueryTimeout.java 
> PRE-CREATION 
>   
> trunk/lucene/core/src/test/org/apache/lucene/index/TestExitableDirectoryReader.java
>  PRE-CREATION 
>   trunk/solr/core/src/java/org/apache/solr/handler/MoreLikeThisHandler.java 
> 1625118 
>   
> trunk/solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java 
> 1625118 
>   trunk/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java 
> 1625118 
>   
> trunk/solr/core/src/test/org/apache/solr/core/ExitableDirectoryReaderTest.java
>  PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/25656/diff/
> 
> 
> Testing
> ---
> 
> Manually tested and tests for lucene as well as Solr.
> 
> 
> Thanks,
> 
> Anshum Gupta
> 
>



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1833 - Still Failing!

2014-09-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1833/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([9D91BDE760C23FDB:1C7733FF179D5FE7]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:706)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at 
org.apache.solr.client.solrj.SolrServer.deleteByQuery(SolrServer.java:285)
at 
org.apache.solr.client.solrj.SolrServer.deleteByQuery(SolrServer.java:271)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.del(AbstractFullDistribZkTestBase.java:728)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.tryDelete(ChaosMonkeySafeLeaderTest.java:194)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:112)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.u

Review Request 25656: Timeout queries during rewrite/expansion based on timeAllowed parameter

2014-09-15 Thread Anshum Gupta

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25656/
---

Review request for lucene.


Bugs: SOLR-5986
https://issues.apache.org/jira/browse/SOLR-5986


Repository: lucene


Description
---

Latest patch


Diffs
-

  
trunk/lucene/core/src/java/org/apache/lucene/index/ExitableDirectoryReader.java 
PRE-CREATION 
  trunk/lucene/core/src/java/org/apache/lucene/index/QueryTimeout.java 
PRE-CREATION 
  
trunk/lucene/core/src/test/org/apache/lucene/index/TestExitableDirectoryReader.java
 PRE-CREATION 
  trunk/solr/core/src/java/org/apache/solr/handler/MoreLikeThisHandler.java 
1625118 
  trunk/solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java 
1625118 
  trunk/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java 
1625118 
  
trunk/solr/core/src/test/org/apache/solr/core/ExitableDirectoryReaderTest.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/25656/diff/


Testing
---

Manually tested and tests for lucene as well as Solr.


Thanks,

Anshum Gupta



[jira] [Updated] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-09-15 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5986:
---
Attachment: SOLR-5986.patch

Added the check for Thread.interrupted(). I wasn't sure how it would behave, but I 
tested some standalone (non-Lucene/Solr) code and it seems to make sense to add 
the check there.
Thanks [~cariensrs].
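
For illustration, a hedged sketch (not the actual patch) of the kind of check 
described: a tight loop such as term enumeration bails out either when timeAllowed 
is exceeded or when the thread has been interrupted:

{code:java}
public class QueryAbortCheckSketch {
  // Would be called from a tight loop such as term enumeration (sketch only).
  static void checkDeadline(long deadlineNanos) {
    if (Thread.interrupted()) {
      // Thread.interrupted() clears the flag; restore it so callers still see the interruption.
      Thread.currentThread().interrupt();
      throw new RuntimeException("query thread was interrupted");
    }
    if (System.nanoTime() > deadlineNanos) {
      throw new RuntimeException("query exceeded timeAllowed");
    }
  }
}
{code}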

> Don't allow runaway queries from harming Solr cluster health or search 
> performance
> --
>
> Key: SOLR-5986
> URL: https://issues.apache.org/jira/browse/SOLR-5986
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 4.10
>
> Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch
>
>
> The intent of this ticket is to have all distributed search requests stop 
> wasting CPU cycles on requests that have already timed out or are so 
> complicated that they won't be able to execute. We have come across a case 
> where a nasty wildcard query within a proximity clause was causing the 
> cluster to enumerate terms for hours even though the query timeout was set to 
> minutes. This caused a noticeable slowdown within the system and forced us to 
> restart the replicas that happened to service that one request; in the worst 
> case, users with a relatively low zk timeout value will have nodes start 
> dropping from the cluster due to long GC pauses.
> [~amccurry] Built a mechanism into Apache Blur to help with the issue in 
> BLUR-142 (see commit comment for code, though look at the latest code on the 
> trunk for newer bug fixes).
> Solr should be able to either prevent these problematic queries from running 
> by some heuristic (possibly estimated size of heap usage) or be able to 
> execute a thread interrupt on all query threads once the time threshold is 
> met. This issue mirrors what others have discussed on the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_67) - Build # 11253 - Still Failing!

2014-09-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11253/
Java: 32bit/jdk1.7.0_67 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.DefaultValueUpdateProcessorTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([5E42201370E5C7CB]:0)




Build Log:
[...truncated 12266 lines...]
   [junit4] Suite: 
org.apache.solr.update.processor.DefaultValueUpdateProcessorTest
   [junit4]   2> Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.update.processor.DefaultValueUpdateProcessorTest-5E42201370E5C7CB-001/init-core-data-001
   [junit4]   2> 1498047 T4277 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(false) and clientAuth (false)
   [junit4]   2> 1498047 T4277 oas.SolrTestCaseJ4.initCore initCore
   [junit4]   2> 1498047 T4277 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/'
   [junit4]   2> 1498048 T4277 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/lib/.svn/'
 to classloader
   [junit4]   2> 1498048 T4277 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/lib/classes/'
 to classloader
   [junit4]   2> 1498049 T4277 oasc.SolrResourceLoader.replaceClassLoader 
Adding 
'file:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/lib/README'
 to classloader
   [junit4]   2> 1498109 T4277 oasc.SolrConfig.<init> Using Lucene 
MatchVersion: 5.0.0
   [junit4]   2> 1498154 T4277 oasc.SolrConfig.<init> Loaded SolrConfig: 
solrconfig-update-processor-chains.xml
   [junit4]   2> 1498155 T4277 oass.IndexSchema.readSchema Reading Solr Schema 
from schema12.xml
   [junit4]   2> 1498161 T4277 oass.IndexSchema.readSchema [null] Schema 
name=test
   [junit4]   2> 1498401 T4277 oass.IndexSchema.readSchema default search field 
in schema is text
   [junit4]   2> 1498402 T4277 oass.IndexSchema.readSchema unique key field: id
   [junit4]   2> 1498403 T4277 oass.IndexSchema.loadCopyFields WARN Field text 
is not multivalued and destination for multiple copyFields (2)
   [junit4]   2> 1498411 T4277 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2> 1498414 T4277 oass.FileExchangeRateProvider.reload Reloading 
exchange rates from file currency.xml
   [junit4]   2> 1498417 T4277 oasc.SolrResourceLoader.locateSolrHome JNDI not 
configured for solr (NoInitialContextEx)
   [junit4]   2> 1498418 T4277 oasc.SolrResourceLoader.locateSolrHome using 
system property solr.solr.home: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr
   [junit4]   2> 1498418 T4277 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for directory: 
'/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/'
   [junit4]   2> 1498454 T4277 oasc.CoreContainer.<init> New CoreContainer 
10369289
   [junit4]   2> 1498454 T4277 oasc.CoreContainer.load Loading cores into 
CoreContainer 
[instanceDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/]
   [junit4]   2> 1498455 T4277 oashc.HttpShardHandlerFactory.getParameter 
Setting socketTimeout to: 0
   [junit4]   2> 1498455 T4277 oashc.HttpShardHandlerFactory.getParameter 
Setting urlScheme to: null
   [junit4]   2> 1498455 T4277 oashc.HttpShardHandlerFactory.getParameter 
Setting connTimeout to: 0
   [junit4]   2> 1498455 T4277 oashc.HttpShardHandlerFactory.getParameter 
Setting maxConnectionsPerHost to: 20
   [junit4]   2> 1498456 T4277 oashc.HttpShardHandlerFactory.getParameter 
Setting corePoolSize to: 0
   [junit4]   2> 1498456 T4277 oashc.HttpShardHandlerFactory.getParameter 
Setting maximumPoolSize to: 2147483647
   [junit4]   2> 1498456 T4277 oashc.HttpShardHandlerFactory.getParameter 
Setting maxThreadIdleTime to: 5
   [junit4]   2> 1498456 T4277 oashc.HttpShardHandlerFactory.getParameter 
Setting sizeOfQueue to: -1
   [junit4]   2> 1498456 T4277 oashc.HttpShardHandlerFactory.getParameter 
Setting fairnessPolicy to: false
   [junit4]   2> 1498457 T4277 oasu.UpdateShardHandler.<init> Creating 
UpdateShardHandler HTTP client with params: 
socketTimeout=3&connTimeout=3&retry=false
   [junit4]   2> 1498457 T4277 oasl.LogWatcher.createWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
   [junit4]   2> 1498457 T4277 oasl.LogWatcher.newRegisteredLogWatcher 
Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
   [junit4]   2> 1498458 T4277 oasc.CoreContainer.load Host Name: 
   [junit4]   2> 1498460 T4278 oasc.SolrResourceLoader.<init> new 
SolrResourceLoader for dire

[jira] [Updated] (SOLR-5986) Don't allow runaway queries from harming Solr cluster health or search performance

2014-09-15 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5986:
---
Attachment: SOLR-5986.patch

Added a Lucene test, fixed comments, and renamed a few things.
This looks fine to me, other than the open question of which package/module this 
belongs in.

I'm running the entire test suite now. The new tests pass, but I'm posting this 
before the full suite finishes since that would take another 40 minutes.

> Don't allow runaway queries from harming Solr cluster health or search 
> performance
> --
>
> Key: SOLR-5986
> URL: https://issues.apache.org/jira/browse/SOLR-5986
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 4.10
>
> Attachments: SOLR-5986.patch, SOLR-5986.patch, SOLR-5986.patch, 
> SOLR-5986.patch, SOLR-5986.patch
>
>
> The intent of this ticket is to have all distributed search requests stop 
> wasting CPU cycles on requests that have already timed out or are so 
> complicated that they won't be able to execute. We have come across a case 
> where a nasty wildcard query within a proximity clause was causing the 
> cluster to enumerate terms for hours even though the query timeout was set to 
> minutes. This caused a noticeable slowdown within the system and forced us to 
> restart the replicas that happened to service that one request; in the worst 
> case, users with a relatively low zk timeout value will have nodes start 
> dropping from the cluster due to long GC pauses.
> [~amccurry] Built a mechanism into Apache Blur to help with the issue in 
> BLUR-142 (see commit comment for code, though look at the latest code on the 
> trunk for newer bug fixes).
> Solr should be able to either prevent these problematic queries from running 
> by some heuristic (possibly estimated size of heap usage) or be able to 
> execute a thread interrupt on all query threads once the time threshold is 
> met. This issue mirrors what others have discussed on the mailing list: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200903.mbox/%3c856ac15f0903272054q2dbdbd19kea3c5ba9e105b...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6507) various bugs using localparams with stats.field

2014-09-15 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-6507:
---
Attachment: SOLR-6507.patch

Not as many remaining test & code changes as I anticipated: my recollection 
of how per-field overrides worked in facets when the "key" local param is used 
was totally wrong, which simplified the logic & test permutations needed.

Updated patch for trunk with javadocs.  Hoping to commit & backport later today 
(or early tomorrow); the backport changes should be fairly straightforward 
since this change didn't require any modifications to the DocValuesStats class 
(nor should any changes be required to UnInvertedField on 4.x).



> various bugs using localparams with stats.field 
> 
>
> Key: SOLR-6507
> URL: https://issues.apache.org/jira/browse/SOLR-6507
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-6507.patch, SOLR-6507.patch
>
>
> StatsComponent has two different code paths for dealing with param parsing 
> depending on whether it's a single-node request or a distributed request, 
> which results in two very different-looking bugs (which in my opinion have the 
> same root cause: bogus local param parsing):
> * the documented local params for stats.field don't work on distributed stats 
> requests at all
> * per-field "calcdistinct" doesn't work if local params are used on a single-node 
> request
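
For reference, an illustrative example of the local-param syntax involved; the 
field name, key, and exclusion tag below are made up:

{code}
stats=true&stats.field={!key=priceStats ex=priceTag}price
{code}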



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Move trunk to Java 8

2014-09-15 Thread Mark Miller
Most votes are procedural.

There are a couple specific veto votes:

1. You can veto a new committer or PMC member if you are on the PMC.

2. You can veto a specific code commit with a valid technical reason. This 
cannot be arbitrary or capricious; it has to be a specific technical reason, 
and if that is addressed, things move on. This kind of veto has to be explicit, 
not just a standard “-1, I’m against this”.

Larger issues, such as which Java versions we support, what back-compat policies 
we follow, or git vs. svn, are majority votes.

Votes are a failure in a consensus community in general, though. It’s best to 
have a discussion thread; a vote thread becomes a last resort when there is 
no way consensus will be reached.

- Mark

http://about.me/markrmiller

> On Sep 15, 2014, at 2:45 PM, david.w.smi...@gmail.com wrote:
> 
> Ryan,
> I’m unclear on what makes a “procedural vote” as such.  This seems to me to 
> be about code modifications — in a big way as it’s a large change to the 
> codebase.
> 
> ~ David 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Move trunk to Java 8

2014-09-15 Thread Ryan Ernst
David,

This thread was about the opinion of whether or not to move trunk to Java
8.  There was no actual code change.  If anyone has any technical issues
with the actual change (LUCENE-5950), please comment there.

Thanks
Ryan

On Mon, Sep 15, 2014 at 11:45 AM, david.w.smi...@gmail.com <
david.w.smi...@gmail.com> wrote:

> Ryan,
> I’m unclear on what makes a “procedural vote” as such.  This seems to me
> to be about code modifications — in a big way as it’s a large change to the
> codebase.
>
> ~ David
>


Re: [VOTE] Move trunk to Java 8

2014-09-15 Thread Benson Margulies
On Mon, Sep 15, 2014 at 2:45 PM, david.w.smi...@gmail.com <
david.w.smi...@gmail.com> wrote:

> Ryan,
> I’m unclear on what makes a “procedural vote” as such.  This seems to me
> to be about code modifications — in a big way as it’s a large change to the
> codebase.
>

David, one way out of this is that a commit is a commit. A sufficiently
unhappy PMC member can veto the commit, and thus force more discussion. If
no PMC members are sufficiently unhappy, it stands.


>
> ~ David
>


Re: [VOTE] Move trunk to Java 8

2014-09-15 Thread david.w.smi...@gmail.com
Ryan,
I’m unclear on what makes a “procedural vote” as such.  This seems to me to
be about code modifications — in a big way as it’s a large change to the
codebase.

~ David


[jira] [Comment Edited] (SOLR-6441) MoreLikeThis support for stopwords as in Lucene

2014-09-15 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134285#comment-14134285
 ] 

Steve Rowe edited comment on SOLR-6441 at 9/15/14 6:41 PM:
---

[~RamanaOpenSource], your patch changes the MLT Handler to always use the 
stopword list in {{lang/stopwords_en.txt}}, but that won't work a) for people 
who don't want to use any stopwords with the MLT handler; or b) for those who 
want to use a different stopword list; or c) for those who don't include the 
{{lang/}} directory in their configset.

This needs to be configurable in the handler, and the default should be to not 
load stopwords at all.

Also, before this can be committed, there needs to be tests demonstrating that 
the new functionality works.
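
For illustration only, a sketch of the kind of handler-level configurability being 
asked for; the mlt.stopwords parameter name is hypothetical, not an existing Solr 
option, and the default would be to load no stopwords at all:

{code:xml}
<!-- hypothetical sketch, not an existing Solr option -->
<requestHandler name="/mlt" class="solr.MoreLikeThisHandler">
  <lst name="defaults">
    <!-- only load a stopword list if one is explicitly configured -->
    <str name="mlt.stopwords">lang/stopwords_en.txt</str>
  </lst>
</requestHandler>
{code}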


was (Author: steve_rowe):
[~RamanaOpenSource], your changes the MLT Handler to always use the stopword 
list in {{lang/stopwords_en.txt}}, but that won't work a) for people who don't 
want to use any stopwords with the MLT handler; or b) for those who want to use 
a different stopword list; or c) for those who don't include the {{lang/}} 
directory in their configset.

This needs to be configurable in the handler, and the default should be to not 
load stopwords at all.

Also, before this can be committed, there needs to be tests demonstrating that 
the new functionality works.

> MoreLikeThis support for stopwords as in Lucene
> ---
>
> Key: SOLR-6441
> URL: https://issues.apache.org/jira/browse/SOLR-6441
> Project: Solr
>  Issue Type: Improvement
>  Components: MoreLikeThis
>Affects Versions: 4.9
>Reporter: Jeroen Steggink
>Priority: Minor
>  Labels: difficulty-easy, impact-low, workaround-exists
> Fix For: 4.10, 4.11
>
> Attachments: SOLR-6441.patch
>
>
> In the Lucene implementation of MoreLikeThis, it's possible to add a list of 
> stopwords which are considered "uninteresting" and are ignored.
> It would be a great addition to the MoreLikeThisHandler to be able to specify 
> a list of stopwords.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6441) MoreLikeThis support for stopwords as in Lucene

2014-09-15 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134285#comment-14134285
 ] 

Steve Rowe commented on SOLR-6441:
--

[~RamanaOpenSource], your patch changes the MLT Handler to always use the stopword 
list in {{lang/stopwords_en.txt}}, but that won't work a) for people who don't 
want to use any stopwords with the MLT handler; or b) for those who want to use 
a different stopword list; or c) for those who don't include the {{lang/}} 
directory in their configset.

This needs to be configurable in the handler, and the default should be to not 
load stopwords at all.

Also, before this can be committed, there needs to be tests demonstrating that 
the new functionality works.

> MoreLikeThis support for stopwords as in Lucene
> ---
>
> Key: SOLR-6441
> URL: https://issues.apache.org/jira/browse/SOLR-6441
> Project: Solr
>  Issue Type: Improvement
>  Components: MoreLikeThis
>Affects Versions: 4.9
>Reporter: Jeroen Steggink
>Priority: Minor
>  Labels: difficulty-easy, impact-low, workaround-exists
> Fix For: 4.10, 4.11
>
> Attachments: SOLR-6441.patch
>
>
> In the Lucene implementation of MoreLikeThis, it's possible to add a list of 
> stopwords which are considered "uninteresting" and are ignored.
> It would be a great addition to the MoreLikeThisHandler to be able to specify 
> a list of stopwords.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134275#comment-14134275
 ] 

Uwe Schindler commented on LUCENE-5950:
---

I will contact Rory from Oracle about that. We might get the fix into the next 
bugfix release.

Also, the issue mentions a workaround of splitting the offending statement into 
two. Maybe we can fix it that way.

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5951) Detect when index is on SSD and set dynamic defaults

2014-09-15 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-5951:
--

 Summary: Detect when index is on SSD and set dynamic defaults
 Key: LUCENE-5951
 URL: https://issues.apache.org/jira/browse/LUCENE-5951
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless


E.g. ConcurrentMergeScheduler should default maxMergeThreads to 3 if it's on 
SSD and 1 if it's on spinning disks.

I think the new NIO2 APIs can let us figure out which device we are mounted on, 
and from there maybe we can do OS-specific stuff, e.g. look at 
/sys/block/<dev>/queue/rotational to see if it's spinning storage or not ...
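
For illustration, a rough Linux-only sketch of that heuristic using the NIO2 
FileStore API; the device-name resolution is naive and this is not a proposed patch:

{code:java}
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class RotationalCheckSketch {

  /** Best-effort guess; returns true (spinning) whenever we cannot tell. */
  static boolean isSpinning(Path indexPath) {
    try {
      FileStore store = Files.getFileStore(indexPath);
      // Very naive: FileStore.name() is often something like "/dev/sda1" on Linux;
      // strip "/dev/" and the trailing partition digits to get the block device.
      String device = store.name().replaceAll("^/dev/", "").replaceAll("\\d+$", "");
      Path rotational = Paths.get("/sys/block", device, "queue", "rotational");
      if (Files.exists(rotational)) {
        return "1".equals(new String(Files.readAllBytes(rotational)).trim());
      }
    } catch (IOException e) {
      // fall through and assume spinning storage
    }
    return true;
  }

  public static void main(String[] args) {
    Path indexPath = Paths.get(args.length > 0 ? args[0] : ".");
    int maxMergeThreads = isSpinning(indexPath) ? 1 : 3; // the defaults suggested above
    System.out.println("maxMergeThreads=" + maxMergeThreads);
  }
}
{code}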



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Move trunk to Java 8

2014-09-15 Thread Ryan Ernst
For the record, I count eleven +1's and two -1's.

On Mon, Sep 15, 2014 at 10:56 AM, Ryan Ernst  wrote:

> This was a procedural/opinion poll vote:
> http://www.apache.org/foundation/voting.html
>
> Only majority is necessary.
>
> On Mon, Sep 15, 2014 at 10:45 AM, Jack Krupansky 
> wrote:
>
>>   ?? I thought there were two -1 votes.
>>
>> -- Jack Krupansky
>>
>>  *From:* Ryan Ernst 
>> *Sent:* Monday, September 15, 2014 10:19 AM
>> *To:* dev@lucene.apache.org
>> *Subject:* Re: [VOTE] Move trunk to Java 8
>>
>>  The vote passed.
>> https://issues.apache.org/jira/browse/LUCENE-5950
>>
>> On Fri, Sep 12, 2014 at 8:48 AM, Adrien Grand  wrote:
>>
>>> +1
>>>
>>> On Fri, Sep 12, 2014 at 5:41 PM, Ryan Ernst  wrote:
>>> > It has been 6 months since Java 8 was released.  It has proven to be
>>> > both stable (no issues like with the initial release of java 7) and
>>> > faster.  And there are a ton of features that would make our lives as
>>> > developers easier (and that can improve the quality of Lucene 5 when
>>> > it is eventually released).
>>> >
>>> > We should stay ahead of the curve, and move trunk to Java 8.
>>> >
>>> > -
>>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>>> >
>>>
>>>
>>>
>>> --
>>> Adrien
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>>
>
>


[jira] [Commented] (LUCENE-5949) Add Accountable.getChildResources()

2014-09-15 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134204#comment-14134204
 ] 

Dawid Weiss commented on LUCENE-5949:
-

Very cool. I just needed it very recently and had to inspect stuff manually.

> Add Accountable.getChildResources()
> ---
>
> Key: LUCENE-5949
> URL: https://issues.apache.org/jira/browse/LUCENE-5949
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Robert Muir
> Attachments: LUCENE-5949.patch
>
>
> Since Lucene 4.5, you can see how much memory lucene is using at a basic 
> level by looking at SegmentReader.ramBytesUsed()
> In 4.11 its already improved, you can pull the codec producers and get ram 
> usage split out by postings, norms, docvalues, stored fields, term vectors, 
> etc.
> Unfortunately most toString's are fairly useless, so you don't have any 
> insight further than that, even though behind the scenes its mostly just 
> adding up other Accountables.
> So instead if we can improve the toString's, and if an Accountable can return 
> its children, we can connect all the dots and you can easily diagnose/debug 
> issues and see what is going on. I know i've been frustrated with having to 
> hack up tons of System.out.printlns during development to see this stuff.
> So I think we should add this method to Accountable:
> {code}
>   /**
>* Returns nested resources of this class. 
>* The result should be a point-in-time snapshot (to avoid race conditions).
>* @see Accountables
>*/
>   // TODO: on java8 make this a default method returning emptyList
>   Iterable getChildResources();
> {code}
> We can also add a simple helper method for quick debugging 
> {{Accountables.toString(Accountable)}} to print the "tree", example output 
> for a lucene segment:
> {noformat}
> _5f(5.0.0):C8330469: 36.4 MB
> |-- postings [PerFieldPostings(formats=1)]: 8 MB
> |-- format 'Lucene41_0' 
> [BlockTreeTermsReader(fields=6,delegate=Lucene41PostingsReader(positions=true,payloads=false))]:
>  8 MB
> |-- field 'alternatenames' 
> [BlockTreeTerms(terms=3360242,postings=13779349,positions=17102250,docs=2876726)]:
>  945.2 KB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=23318,arcs=66497)]:
>  945.1 KB
> |-- field 'asciiname' 
> [BlockTreeTerms(terms=2451266,postings=16849659,positions=16891234,docs=8329981)]:
>  686.1 KB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=12976,arcs=44103)]:
>  686 KB
> |-- field 'geonameid' 
> [BlockTreeTerms(terms=8363399,postings=33321876,positions=-1,docs=8330469)]: 
> 1.3 MB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=528,arcs=66225)]:
>  1.3 MB
> |-- field 'latitude' 
> [BlockTreeTerms(terms=8714542,postings=33321876,positions=-1,docs=8330469)]: 
> 1.7 MB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=854,arcs=77300)]:
>  1.7 MB
> |-- field 'longitude' 
> [BlockTreeTerms(terms=11557222,postings=33321876,positions=-1,docs=8330469)]: 
> 2.6 MB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=1577,arcs=114570)]:
>  2.6 MB
> |-- field 'name' 
> [BlockTreeTerms(terms=2598879,postings=16833071,positions=16874267,docs=8330325)]:
>  771.5 KB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=13790,arcs=46514)]:
>  771.3 KB
> |-- delegate [Lucene41PostingsReader(positions=true,payloads=false)]: 
> 32 bytes
> |-- norms [Lucene49NormsProducer(fields=3,active=3)]: 15.9 MB
> |-- field 'alternatenames' [byte array]: 7.9 MB
> |-- field 'asciiname' [table compressed 
> [Packed64SingleBlock4(bitsPerValue=4,size=8330469,blocks=520655)]]: 4 MB
> |-- field 'name' [table compressed 
> [Packed64SingleBlock4(bitsPerValue=4,size=8330469,blocks=520655)]]: 4 MB
> |-- docvalues [PerFieldDocValues(formats=1)]: 12.1 MB
> |-- format 'Lucene410_0' [Lucene410DocValuesProducer(fields=5)]: 12.1 MB
> |-- addresses field 'alternatenames' 
> [MonotonicBlockPackedReader(blocksize=16384,size=407026,avgBPV=16)]: 808.5 KB
> |-- addresses field 'asciiname' 
> [MonotonicBlockPackedReader(blocksize=16384,size=330528,avgBPV=17)]: 698.6 KB
> |-- addresses field 'name' 
> [MonotonicBlockPackedReader(blocksize=16384,size=335020,avgBPV=17)]: 703.7 KB
> |-- ord index field 'alternatenames' 
> [MonotonicBlockPackedReader(blocksize=16384,size=8330470,avgBPV=9)]: 9.8 MB
> |-- reverse index field 'alternatenames' 
> [ReverseTermsIndex(size=6360)]: 77.9 KB
> |-- term bytes [PagedBytes(blocksize=32768)]: 67.7 KB
> |-- term addresses 
> [MonotonicBlockPackedR

Re: [VOTE] Move trunk to Java 8

2014-09-15 Thread Ryan Ernst
This was a procedural/opinion poll vote:
http://www.apache.org/foundation/voting.html

Only a majority is necessary.

On Mon, Sep 15, 2014 at 10:45 AM, Jack Krupansky 
wrote:

>   ?? I thought there were two -1 votes.
>
> -- Jack Krupansky
>
>  *From:* Ryan Ernst 
> *Sent:* Monday, September 15, 2014 10:19 AM
> *To:* dev@lucene.apache.org
> *Subject:* Re: [VOTE] Move trunk to Java 8
>
>  The vote passed.
> https://issues.apache.org/jira/browse/LUCENE-5950
>
> On Fri, Sep 12, 2014 at 8:48 AM, Adrien Grand  wrote:
>
>> +1
>>
>> On Fri, Sep 12, 2014 at 5:41 PM, Ryan Ernst  wrote:
>> > It has been 6 months since Java 8 was released.  It has proven to be
>> > both stable (no issues like with the initial release of java 7) and
>> > faster.  And there are a ton of features that would make our lives as
>> > developers easier (and that can improve the quality of Lucene 5 when
>> > it is eventually released).
>> >
>> > We should stay ahead of the curve, and move trunk to Java 8.
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>> >
>>
>>
>>
>> --
>> Adrien
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


Re: [VOTE] Move trunk to Java 8

2014-09-15 Thread Jack Krupansky
?? I thought there were two -1 votes.

-- Jack Krupansky

From: Ryan Ernst 
Sent: Monday, September 15, 2014 10:19 AM
To: dev@lucene.apache.org 
Subject: Re: [VOTE] Move trunk to Java 8

The vote passed. 
https://issues.apache.org/jira/browse/LUCENE-5950


On Fri, Sep 12, 2014 at 8:48 AM, Adrien Grand  wrote:

  +1


  On Fri, Sep 12, 2014 at 5:41 PM, Ryan Ernst  wrote:
  > It has been 6 months since Java 8 was released.  It has proven to be
  > both stable (no issues like with the initial release of java 7) and
  > faster.  And there are a ton of features that would make our lives as
  > developers easier (and that can improve the quality of Lucene 5 when
  > it is eventually released).
  >
  > We should stay ahead of the curve, and move trunk to Java 8.
  >

  > -
  > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
  > For additional commands, e-mail: dev-h...@lucene.apache.org
  >



  --
  Adrien

  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
  For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-6326) ExternalFileFieldReloader and commits

2014-09-15 Thread Peter Keegan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134170#comment-14134170
 ] 

Peter Keegan commented on SOLR-6326:


Hi Ramana,

The use case is:

1. A SolrJ client updates the main index (and replicas) and issues a 
distributed commit at regular intervals.
2. Another component updates the external files at other intervals.

Usually, the commits from (1) result in a new searcher which triggers the 
org.apache.solr.schema.ExternalFileFieldReloader, but only if there were 
changes to the main index.

Using ReloadCacheRequestHandler in (2) above would result in the loss of 
index/replica synchronization provided by the distributed commit in (1), and 
reloading the core is slow and overkill.

Thanks,
Peter
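
A minimal SolrJ sketch of (1), assuming the SolrJ 4.x HttpSolrServer API and 
placeholder URL/interval values; the external-file updates in (2) happen in a 
separate process and are not shown:

{code}
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

/** Sketch: issue an explicit commit at a fixed interval so the index and its replicas
 *  open a new searcher together (which is what triggers ExternalFileFieldReloader). */
public class PeriodicCommitClient {
  public static void main(String[] args) {
    final HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1"); // assumed URL

    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleAtFixedRate(new Runnable() {
      @Override
      public void run() {
        try {
          solr.commit();   // hard commit, opens a new searcher
        } catch (SolrServerException | IOException e) {
          e.printStackTrace();
        }
      }
    }, 1, 1, TimeUnit.MINUTES);  // interval is a placeholder

    // documents are added between commits via the normal update path, e.g. solr.add(doc)
  }
}
{code}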

> ExternalFileFieldReloader and commits
> -
>
> Key: SOLR-6326
> URL: https://issues.apache.org/jira/browse/SOLR-6326
> Project: Solr
>  Issue Type: Bug
>Reporter: Peter Keegan
>  Labels: difficulty-medium, externalfilefield, impact-medium
>
> When there are multiple 'external file field' files available, Solr will 
> reload the last one (lexicographically) with a commit, but only if changes 
> were made to the index. Otherwise, it skips the reload and logs: "No 
> uncommitted changes. Skipping IW.commit." 
> IndexWriter.hasUncommittedChanges() returns false, but new external files 
> should be reloaded with commits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1624773 - in /lucene/dev/branches/branch_4x: ./ lucene/ lucene/core/ lucene/core/src/java/org/apache/lucene/codecs/lucene46/Lucene46SegmentInfoWriter.java

2014-09-15 Thread Michael McCandless
I'll open an issue for this; I think it should block 4.10.1.

Mike McCandless

http://blog.mikemccandless.com

On Sun, Sep 14, 2014 at 8:21 AM, Robert Muir  wrote:
> Also if there are going to be checks against this, if it's really going
> to become the primary entry point for deciding the version of a
> segment and what exception to throw and so on, then I strongly
> disagree with string encoding.
>
> In my opinion we already have such a scheme figured out (CodecUtil
> methods) with exception handling and everything...
>
> On Sun, Sep 14, 2014 at 7:58 AM, Robert Muir  wrote:
>> I agree, can we remove all the checks here? They are bogus in a few ways:
>>
>> 1. Version.parse claims it's forwards compatible, but the ctor throws
>> these exceptions.
>> 2. These exceptions are thrown without indicating what the actual value was.
>> 3. These exceptions are thrown with no context (the toString of the
>> input file itself).
>>
>> So, I don't think Version should do these checks, at least, if it
>> wants to do them, it needs to step up to the plate and do them
>> correctly.
>>
>> On Sun, Sep 14, 2014 at 7:19 AM, Uwe Schindler  wrote:
>>> Especially, as I said in my previous email: it is in my opinion bad to throw 
>>> IllegalArgumentException from Version's parse if it is a valid version.
>>>
>>> I tried it locally and created an index with version 6.0 written to disk (I 
>>> hacked it). If I try to open it with IndexReader of Lucene 4, it should throw 
>>> IndexFormatTooNewException! But in fact it did not, because Version.parse() 
>>> failed earlier! So we are inconsistent with exceptions! It should in fact 
>>> really throw IndexFormatTooNewException!
>>>
>>> In my opinion, the bounds checks in the Version ctor should be "optional", 
>>> so when parsing versions from index files we have a chance to throw the 
>>> "correct exception".
>>>
>>> Uwe
>>>
>>> -
>>> Uwe Schindler
>>> H.-H.-Meier-Allee 63, D-28213 Bremen
>>> http://www.thetaphi.de
>>> eMail: u...@thetaphi.de
>>>
>>>
 -Original Message-
 From: Michael McCandless [mailto:luc...@mikemccandless.com]
 Sent: Sunday, September 14, 2014 12:46 PM
 To: Lucene/Solr dev
 Subject: Re: svn commit: r1624773 - in /lucene/dev/branches/branch_4x: ./
 lucene/ lucene/core/
 lucene/core/src/java/org/apache/lucene/codecs/lucene46/Lucene46SegmentInfoWriter.java

 On Sat, Sep 13, 2014 at 10:23 PM, Ryan Ernst  wrote:
 > How would the Version be constructed with an invalid major version,
 > given this exact check in the constructor (and the fact that the only
 > way to construct is through Version.parse)?

 I think it makes sense to be defensive here and have the check in two 
 places.

 This also guards against any future changes to Version that somehow allow
 this ... it sure looks like Version can't ever be created in an "out of 
 bounds"
 state today, but who knows tomorrow ...

 Mike McCandless

 http://blog.mikemccandless.com

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Move trunk to Java 8

2014-09-15 Thread Ryan Ernst
The vote passed.
https://issues.apache.org/jira/browse/LUCENE-5950

On Fri, Sep 12, 2014 at 8:48 AM, Adrien Grand  wrote:

> +1
>
> On Fri, Sep 12, 2014 at 5:41 PM, Ryan Ernst  wrote:
> > It has been 6 months since Java 8 was released.  It has proven to be
> > both stable (no issues like with the initial release of java 7) and
> > faster.  And there are a ton of features that would make our lives as
> > developers easier (and that can improve the quality of Lucene 5 when
> > it is eventually released).
> >
> > We should stay ahead of the curve, and move trunk to Java 8.
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
>
>
> --
> Adrien
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-5950:
---
Attachment: LUCENE-5950.patch

This patch moves trunk to Java 8.  However, it doesn't yet work! Java 8u20 
appears to have a bug when trying to compile with source set to 1.8:
https://bugs.openjdk.java.net/browse/JDK-8056984?page=com.atlassian.streams.streams-jira-plugin:activity-stream-issue-tab

Still, I am putting up this patch so that when u40 is released, we can be 
ready.  This patch was against git hash 
{{611fa4956377d0448f7038f70d3be36ec882c1e6}} and svn revision {{1624882}}.

> Move to Java 8 in trunk
> ---
>
> Key: LUCENE-5950
> URL: https://issues.apache.org/jira/browse/LUCENE-5950
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5950.patch
>
>
> The dev list thread "[VOTE] Move trunk to Java 8" passed.
> http://markmail.org/thread/zcddxioz2yvsdqkc
> This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6326) ExternalFileFieldReloader and commits

2014-09-15 Thread Ramana (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134144#comment-14134144
 ] 

Ramana commented on SOLR-6326:
--

My previous issue is resolved. Now I am able to reload the external file field 
changes.

Basically I have 3 fields defined in my schema.xml with the external file field 
type, like below:







I created one file for each field in the data directory. When I modified all the 
external files while the server was up, I was able to see all the changes by 
following the steps below:

1) http://localhost:8983/solr/reloadCache

2) http://localhost:8983/solr/select?q=*&fl=id,score,field(testField)
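
For reference, the same two steps via SolrJ; the base URL, the /reloadCache 
handler path and the testField name are assumptions carried over from the URLs 
above:

{code}
import java.io.IOException;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;

/** Sketch of steps (1) and (2) using SolrJ instead of raw URLs. */
public class ExternalFileFieldCheck {
  public static void main(String[] args) throws SolrServerException, IOException {
    HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr"); // assumed base URL

    // (1) hit the reload handler (assumes it is registered at /reloadCache in solrconfig.xml)
    QueryRequest reload = new QueryRequest(new SolrQuery());
    reload.setPath("/reloadCache");
    reload.process(solr);

    // (2) query and pull the external file field value back via the field() function
    SolrQuery q = new SolrQuery("*");
    q.setFields("id", "score", "field(testField)");  // 'testField' is a placeholder field name
    QueryResponse rsp = solr.query(q);
    System.out.println(rsp.getResults());
  }
}
{code}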


Peter,
Could you please give an example of the problem here? That will help me 
understand the issue better.

Thanks,
Ramana.





> ExternalFileFieldReloader and commits
> -
>
> Key: SOLR-6326
> URL: https://issues.apache.org/jira/browse/SOLR-6326
> Project: Solr
>  Issue Type: Bug
>Reporter: Peter Keegan
>  Labels: difficulty-medium, externalfilefield, impact-medium
>
> When there are multiple 'external file field' files available, Solr will 
> reload the last one (lexicographically) with a commit, but only if changes 
> were made to the index. Otherwise, it skips the reload and logs: "No 
> uncommitted changes. Skipping IW.commit." 
> IndexWriter.hasUncommittedChanges() returns false, but new external files 
> should be reloaded with commits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5950) Move to Java 8 in trunk

2014-09-15 Thread Ryan Ernst (JIRA)
Ryan Ernst created LUCENE-5950:
--

 Summary: Move to Java 8 in trunk
 Key: LUCENE-5950
 URL: https://issues.apache.org/jira/browse/LUCENE-5950
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst


The dev list thread "[VOTE] Move trunk to Java 8" passed.
http://markmail.org/thread/zcddxioz2yvsdqkc

This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6520) Documentation web page is missing link to live Solr Reference Guide

2014-09-15 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134139#comment-14134139
 ] 

Alexandre Rafalovitch commented on SOLR-6520:
-

I could argue about this, except you are linking to the WIKI from those same 
pages already. So all you are doing is giving preferential treatment to the 
WIKI instead. And if users do - somehow - manage to find the reference guide 
from a Google search, the problem you described has not gone away.

So I would still say: link to the reference guide wherever you have a wiki 
link right now and mention its future-looking status in brackets. Next to it, 
link to the PDF for the active version, to make it easier for people on an 
old version.

I thought the current state was due to somebody missing the obvious. Now, it 
looks like a case of premature DE-optimization. :-)

> Documentation web page is missing link to live Solr Reference Guide
> ---
>
> Key: SOLR-6520
> URL: https://issues.apache.org/jira/browse/SOLR-6520
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 4.10
> Environment: web
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>  Labels: documentation, website
>
> The [official document page for 
> Solr|https://lucene.apache.org/solr/documentation.html] is missing the link 
> to the live Solr Reference Guide. Only the link to the PDF is there. In fact, one 
> has to go to the WIKI, it seems, to find the link. 
> It is also not linked from [the release-specific documentation 
> page|https://lucene.apache.org/solr/4_10_0/index.html] either.
> This means the search engines do not easily discover the new content and it 
> does not show up in searches when people look for information. It also 
> means people may hesitate to look at it, if they have to download the whole 
> PDF first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5949) Add Accountable.getChildResources()

2014-09-15 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14134135#comment-14134135
 ] 

Michael McCandless commented on LUCENE-5949:


+1, this looks wonderful!

Now there is no more mystery left when users are confused about what's using 
RAM in Lucene...

> Add Accountable.getChildResources()
> ---
>
> Key: LUCENE-5949
> URL: https://issues.apache.org/jira/browse/LUCENE-5949
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Robert Muir
> Attachments: LUCENE-5949.patch
>
>
> Since Lucene 4.5, you can see how much memory lucene is using at a basic 
> level by looking at SegmentReader.ramBytesUsed()
> In 4.11 its already improved, you can pull the codec producers and get ram 
> usage split out by postings, norms, docvalues, stored fields, term vectors, 
> etc.
> Unfortunately most toString's are fairly useless, so you don't have any 
> insight further than that, even though behind the scenes its mostly just 
> adding up other Accountables.
> So instead if we can improve the toString's, and if an Accountable can return 
> its children, we can connect all the dots and you can easily diagnose/debug 
> issues and see what is going on. I know i've been frustrated with having to 
> hack up tons of System.out.printlns during development to see this stuff.
> So I think we should add this method to Accountable:
> {code}
>   /**
>* Returns nested resources of this class. 
>* The result should be a point-in-time snapshot (to avoid race conditions).
>* @see Accountables
>*/
>   // TODO: on java8 make this a default method returning emptyList
>   Iterable getChildResources();
> {code}
> We can also add a simple helper method for quick debugging 
> {{Accountables.toString(Accountable)}} to print the "tree", example output 
> for a lucene segment:
> {noformat}
> _5f(5.0.0):C8330469: 36.4 MB
> |-- postings [PerFieldPostings(formats=1)]: 8 MB
> |-- format 'Lucene41_0' 
> [BlockTreeTermsReader(fields=6,delegate=Lucene41PostingsReader(positions=true,payloads=false))]:
>  8 MB
> |-- field 'alternatenames' 
> [BlockTreeTerms(terms=3360242,postings=13779349,positions=17102250,docs=2876726)]:
>  945.2 KB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=23318,arcs=66497)]:
>  945.1 KB
> |-- field 'asciiname' 
> [BlockTreeTerms(terms=2451266,postings=16849659,positions=16891234,docs=8329981)]:
>  686.1 KB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=12976,arcs=44103)]:
>  686 KB
> |-- field 'geonameid' 
> [BlockTreeTerms(terms=8363399,postings=33321876,positions=-1,docs=8330469)]: 
> 1.3 MB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=528,arcs=66225)]:
>  1.3 MB
> |-- field 'latitude' 
> [BlockTreeTerms(terms=8714542,postings=33321876,positions=-1,docs=8330469)]: 
> 1.7 MB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=854,arcs=77300)]:
>  1.7 MB
> |-- field 'longitude' 
> [BlockTreeTerms(terms=11557222,postings=33321876,positions=-1,docs=8330469)]: 
> 2.6 MB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=1577,arcs=114570)]:
>  2.6 MB
> |-- field 'name' 
> [BlockTreeTerms(terms=2598879,postings=16833071,positions=16874267,docs=8330325)]:
>  771.5 KB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=13790,arcs=46514)]:
>  771.3 KB
> |-- delegate [Lucene41PostingsReader(positions=true,payloads=false)]: 
> 32 bytes
> |-- norms [Lucene49NormsProducer(fields=3,active=3)]: 15.9 MB
> |-- field 'alternatenames' [byte array]: 7.9 MB
> |-- field 'asciiname' [table compressed 
> [Packed64SingleBlock4(bitsPerValue=4,size=8330469,blocks=520655)]]: 4 MB
> |-- field 'name' [table compressed 
> [Packed64SingleBlock4(bitsPerValue=4,size=8330469,blocks=520655)]]: 4 MB
> |-- docvalues [PerFieldDocValues(formats=1)]: 12.1 MB
> |-- format 'Lucene410_0' [Lucene410DocValuesProducer(fields=5)]: 12.1 MB
> |-- addresses field 'alternatenames' 
> [MonotonicBlockPackedReader(blocksize=16384,size=407026,avgBPV=16)]: 808.5 KB
> |-- addresses field 'asciiname' 
> [MonotonicBlockPackedReader(blocksize=16384,size=330528,avgBPV=17)]: 698.6 KB
> |-- addresses field 'name' 
> [MonotonicBlockPackedReader(blocksize=16384,size=335020,avgBPV=17)]: 703.7 KB
> |-- ord index field 'alternatenames' 
> [MonotonicBlockPackedReader(blocksize=16384,size=8330470,avgBPV=9)]: 9.8 MB
> |-- reverse index field 'alternatenames' 
> [ReverseTermsIndex(size=6360)]: 77.9 KB
> |-- term bytes [PagedBytes(blocksize=32768)]: 67.7 KB

[jira] [Updated] (LUCENE-5949) Add Accountable.getChildResources()

2014-09-15 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5949:

Attachment: LUCENE-5949.patch

Patch. It's somewhat large since it includes improved toString()s everywhere in 
the codec API (which IMO is a good thing in general).

Additionally I found some crabs (missing codec checks in the old term vectors 
codec, broken hashing on FieldInfo with MemoryDV, etc.) and fixed those here too.

I added assertions to AssertingCodec and to TestUtil.checkXXX to ensure that 
toString() works, that the returned iterators are immutable, and that the 
implementations work.
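
For anyone reading along, here is a small self-contained sketch of the idea; the 
Accountable interface below is a toy modeled on the snippet quoted underneath, 
not the attached patch:

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Toy model: composites expose an immutable child snapshot, and a printer walks the tree. */
public class AccountableTreeSketch {

  interface Accountable {
    long ramBytesUsed();
    /** Point-in-time snapshot of nested resources (empty if this is a leaf). */
    Iterable<? extends Accountable> getChildResources();
  }

  /** A named composite whose RAM usage is the sum of its children. */
  static class Composite implements Accountable {
    final String name;
    final List<Accountable> children = new ArrayList<>();
    Composite(String name) { this.name = name; }
    void add(Accountable child) { children.add(child); }
    @Override public long ramBytesUsed() {
      long sum = 0;
      for (Accountable c : children) sum += c.ramBytesUsed();
      return sum;
    }
    @Override public Iterable<? extends Accountable> getChildResources() {
      return Collections.unmodifiableList(children);  // callers cannot mutate the snapshot
    }
    @Override public String toString() { return name; }
  }

  static Accountable leaf(final String name, final long bytes) {
    return new Accountable() {
      @Override public long ramBytesUsed() { return bytes; }
      @Override public Iterable<? extends Accountable> getChildResources() {
        return Collections.<Accountable>emptyList();
      }
      @Override public String toString() { return name; }
    };
  }

  /** Recursively prints the tree, loosely like the Accountables.toString output quoted below. */
  static void print(Accountable a, int depth) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < depth; i++) sb.append("  ");
    if (depth > 0) sb.append("|-- ");
    sb.append(a).append(": ").append(a.ramBytesUsed()).append(" bytes");
    System.out.println(sb);
    for (Accountable child : a.getChildResources()) print(child, depth + 1);
  }

  public static void main(String[] args) {
    Composite segment = new Composite("segment");
    Composite postings = new Composite("postings");
    postings.add(leaf("term index", 945100));
    segment.add(postings);
    segment.add(leaf("norms", 15900000));
    print(segment, 0);
  }
}
{code}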

> Add Accountable.getChildResources()
> ---
>
> Key: LUCENE-5949
> URL: https://issues.apache.org/jira/browse/LUCENE-5949
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Robert Muir
> Attachments: LUCENE-5949.patch
>
>
> Since Lucene 4.5, you can see how much memory lucene is using at a basic 
> level by looking at SegmentReader.ramBytesUsed()
> In 4.11 its already improved, you can pull the codec producers and get ram 
> usage split out by postings, norms, docvalues, stored fields, term vectors, 
> etc.
> Unfortunately most toString's are fairly useless, so you don't have any 
> insight further than that, even though behind the scenes its mostly just 
> adding up other Accountables.
> So instead if we can improve the toString's, and if an Accountable can return 
> its children, we can connect all the dots and you can easily diagnose/debug 
> issues and see what is going on. I know i've been frustrated with having to 
> hack up tons of System.out.printlns during development to see this stuff.
> So I think we should add this method to Accountable:
> {code}
>   /**
>* Returns nested resources of this class. 
>* The result should be a point-in-time snapshot (to avoid race conditions).
>* @see Accountables
>*/
>   // TODO: on java8 make this a default method returning emptyList
>   Iterable getChildResources();
> {code}
> We can also add a simple helper method for quick debugging 
> {{Accountables.toString(Accountable)}} to print the "tree", example output 
> for a lucene segment:
> {noformat}
> _5f(5.0.0):C8330469: 36.4 MB
> |-- postings [PerFieldPostings(formats=1)]: 8 MB
> |-- format 'Lucene41_0' 
> [BlockTreeTermsReader(fields=6,delegate=Lucene41PostingsReader(positions=true,payloads=false))]:
>  8 MB
> |-- field 'alternatenames' 
> [BlockTreeTerms(terms=3360242,postings=13779349,positions=17102250,docs=2876726)]:
>  945.2 KB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=23318,arcs=66497)]:
>  945.1 KB
> |-- field 'asciiname' 
> [BlockTreeTerms(terms=2451266,postings=16849659,positions=16891234,docs=8329981)]:
>  686.1 KB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=12976,arcs=44103)]:
>  686 KB
> |-- field 'geonameid' 
> [BlockTreeTerms(terms=8363399,postings=33321876,positions=-1,docs=8330469)]: 
> 1.3 MB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=528,arcs=66225)]:
>  1.3 MB
> |-- field 'latitude' 
> [BlockTreeTerms(terms=8714542,postings=33321876,positions=-1,docs=8330469)]: 
> 1.7 MB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=854,arcs=77300)]:
>  1.7 MB
> |-- field 'longitude' 
> [BlockTreeTerms(terms=11557222,postings=33321876,positions=-1,docs=8330469)]: 
> 2.6 MB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=1577,arcs=114570)]:
>  2.6 MB
> |-- field 'name' 
> [BlockTreeTerms(terms=2598879,postings=16833071,positions=16874267,docs=8330325)]:
>  771.5 KB
> |-- term index 
> [FST(input=BYTE1,output=ByteSequenceOutputs,packed=false,nodes=13790,arcs=46514)]:
>  771.3 KB
> |-- delegate [Lucene41PostingsReader(positions=true,payloads=false)]: 
> 32 bytes
> |-- norms [Lucene49NormsProducer(fields=3,active=3)]: 15.9 MB
> |-- field 'alternatenames' [byte array]: 7.9 MB
> |-- field 'asciiname' [table compressed 
> [Packed64SingleBlock4(bitsPerValue=4,size=8330469,blocks=520655)]]: 4 MB
> |-- field 'name' [table compressed 
> [Packed64SingleBlock4(bitsPerValue=4,size=8330469,blocks=520655)]]: 4 MB
> |-- docvalues [PerFieldDocValues(formats=1)]: 12.1 MB
> |-- format 'Lucene410_0' [Lucene410DocValuesProducer(fields=5)]: 12.1 MB
> |-- addresses field 'alternatenames' 
> [MonotonicBlockPackedReader(blocksize=16384,size=407026,avgBPV=16)]: 808.5 KB
> |-- addresses field 'asciiname' 
> [MonotonicBlockPackedReader(blocksize=16384,size=330528,avgBPV=17)]: 698.6 KB
> |-- addresses field 'name' 
> [MonotonicBlockPackedReader(blocksize=16384,size=335020,
